| Name of the Table | Source |
|---|---|
| D1_1_SDG | dashboards.sdgindex.org |
| D2_2_Unemployment_rate | ilo.org |
| D3_0_GDP_per_capita | data.worldbank.org |
| D3_1_Military_expenditure_percent_GDP | data.worldbank.org |
| D3_2_Military_expenditure_percent_gov_exp | data.worldbank.org |
| D4_0_Internet_usage | ourworldindata.org |
| D5_0_Human_freedom_index | cato.org |
| D6_0_Disaters | kaggle.com |
| D7_0_COVID | github.com |
| D8_0_Conflicts | datacatalog.worldbank.org |
Comparative Analysis of SDG Implementation Evolution Worldwide
1 Introduction
1.1 Overview and Motivation
The global significance of the SDGs is our starting point. The adoption of the SDGs by the United Nations in 2015 marked a major global commitment to address pressing issues such as poverty, inequality, and climate change. The fact that these goals were unanimously adopted by 193 member states underscores their importance. This prompted us to ask: can we evaluate the progress? What has actually been done so far? Although the SDGs have attracted considerable attention and backing, it is essential to examine the period preceding and following their implementation. Understanding the actions taken and the progress made is essential to determine whether these global commitments are resulting in tangible improvements to individuals’ lives. By examining the evolution of all countries and their respective contributions towards achieving the SDGs, we can develop a comprehensive understanding of collective efforts and identify potential disparities or gaps.
1.3 Research questions
Focus on factors: What can explain the state of countries regarding sustainable development? (We will analyse different factors: scores from the Human Freedom Index, GDP per capita, military expenditures as a % of GDP and of government expenditure, unemployment rate, and internet usage.) See the data description for more precise information about the factors.
Focus on time: How has the adoption of the SDGs in 2015 influenced their achievement? (We want to compare the SDG scores of the different countries before and after 2015 (scores are calculated even for years before the adoption) to see whether the adoption gave a real “push” to sustainable development.)
Focus on events: Is the evolution of sustainable development influenced by uncontrollable events, such as economic crises, health crises, and natural disasters? (We will analyse the impact of COVID-19, natural disasters, and conflicts (number of deaths, damages, etc.) on the SDG scores.) See the data description for more precise information about how the impact of these events is materialized in the data.
Focus on the relationships between SDGs: How are the different SDGs linked? (We want to see whether some SDGs are linked in the sense that a high score on one implies a high score on another, and thus whether we can form groups of SDGs that are comparable in that way.)
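As a minimal sketch of how this last question could be approached (on simulated scores, since the real data is only introduced below; goal names and values here are purely illustrative), pairwise correlations between goal scores can be computed and then clustered to form groups of goals that move together:

```r
# Hypothetical sketch: group SDG scores by how strongly they correlate.
# The "goals" below are simulated, not the real dataset.
set.seed(1)
n <- 100
base_a <- rnorm(n)
base_b <- rnorm(n)
scores <- data.frame(
  goal1 = base_a + rnorm(n, sd = 0.3),  # goals 1 and 2 co-move
  goal2 = base_a + rnorm(n, sd = 0.3),
  goal3 = base_b + rnorm(n, sd = 0.3),  # goals 3 and 4 co-move
  goal4 = base_b + rnorm(n, sd = 0.3)
)
cormat <- cor(scores)                                # pairwise correlations
groups <- cutree(hclust(as.dist(1 - cormat)), k = 2) # cluster correlated goals
```

Here `1 - cormat` turns correlation into a distance, so highly correlated goals end up in the same cluster.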
2 Data
2.1 Sources
We collect our data from the Sustainable Development Report (SDG scores), the International Labour Organization (ILOSTAT), the World Bank, Our World in Data, the Cato Institute, Kaggle (for disasters, as we could not find relevant accessible information elsewhere), and GitHub. We found several datasets containing useful information related to the SDGs. The details about these data and the links are presented in the next section. Using the kableExtra package, we provide a comprehensive list of our sources and the corresponding links, as outlined below:
2.2 Description
During the wrangling process, we added data to our main table (D1_1_SDG) from the other datasets, matching them on the country code and the year. The tables below show all the variables present in our 9 databases. We will then merge them to obtain our final table for the analysis.
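The matching step can be sketched as follows (with toy tables and illustrative values; the real tables carry many more columns): a left join on the (code, year) key that keeps every row of the main SDG table, even when the secondary table has no matching observation.

```r
# Toy illustration of merging two tables on country code and year,
# keeping all rows of the main SDG table (left join). Values are made up.
sdg <- data.frame(code = c("CHE", "CHE", "FRA"),
                  year = c(2014, 2015, 2015),
                  overallscore = c(78.1, 78.6, 79.0))
gdp <- data.frame(code = c("CHE", "FRA"),
                  year = c(2015, 2015),
                  GDPpercapita = c(86000, 41000))
merged <- merge(sdg, gdp, by = c("code", "year"), all.x = TRUE)
# The (CHE, 2014) row survives with GDPpercapita = NA.
```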
D1_1_SDG
Our primary database focuses on the Sustainable Development Goals (SDG). Below is a table summarizing the key variables included:
| Variable Name | Explanation |
|---|---|
| code | Country code (ISO) |
| country | Country name |
| year | Year of the observation (2000-2022) |
| overallscore | Overall score on all 17 SDGs (the scores are percentages of achievement of the goals, determined by the UN based on several indicators) |
| goal1:goal17 | Score on each SDG except SDG 14 (16 variables) |
| population | Population of the country |
D2_2_Unemployment_rate
| Variable Name | Explanation |
|---|---|
| code | Country code (ISO) |
| country | Country name |
| year | Year of the observation (2000-2022) |
| unemployment.rate | Unemployment rate (% of the population 15 years old and older) |
D3_0_GDP_per_capita
| Variable Name | Explanation |
|---|---|
| code | Country code (ISO) |
| country | Country name |
| year | Year of the observation (2000-2022) |
| GDPpercapita | GDP per capita |
D3_1_Military_expenditure_percent_GDP
| Variable Name | Explanation |
|---|---|
| code | Country code (ISO) |
| country | Country name |
| year | Year of the observation (2000-2022) |
| MilitaryExpenditurePercentGDP | Military expenditures in percentage of GDP |
D4_0_Internet_usage
| Variable Name | Explanation |
|---|---|
| code | Country code (ISO) |
| country | Country name |
| year | Year of the observation (2000-2022) |
| internet.usage | Internet usage (% of the population) |
D5_0_Human_freedom_index
| Variable Name | Explanation |
|---|---|
| code | Country code (ISO) |
| country | Country name |
| year | Year of the observation (2000-2022) |
| region | Part of the world, group of countries (e.g. Eastern Europe, Sub-Saharan Africa, South Asia, etc.) |
| pf_law | Rule of law, mean score of: Procedural justice, Civil justice, Criminal justice, Rule of law (V-Dem) |
| pf_security | Security and safety, mean score of: Homicide, Disappearances, conflicts, and terrorism |
| pf_movement | Freedom of movement (V-Dem), Freedom of movement (CLD) |
| pf_religion | Freedom of religion, Religious organization repression |
| pf_assembly | Civil society entry and exit, Freedom of assembly, Freedom to form/run political parties, Civil society repression |
| pf_expression | Direct attacks on the press, Media and expression (V-Dem), Media and expression (Freedom House), Media and expression (BTI), Media and expression (CLD) |
| pf_identity | Same-sex relationships, Divorce, Inheritance rights, Female genital mutilation |
| ef_gouvernment | Government consumption, Transfers and subsidies, Government investment, Top marginal tax rate, State ownership of assets |
| ef_legal | Judicial independence, Impartial courts, Protection of property rights, Military interference, Integrity of the legal system, Legal enforcement of contracts, Regulatory costs, Reliability of police |
| ef_money | Money growth, Standard deviation of inflation, Inflation: Most recent year, Freedom to own foreign currency |
| ef_trade | Tariffs, Regulatory trade barriers, Black-market exchange rates, Movement of capital and people |
| ef_regulation | Credit market regulations, Labor market regulations, Business regulations |
D6_0_Disaters
| Variable Name | Explanation |
|---|---|
| code | Country code (ISO) |
| country | Country name |
| year | Year of the observation (2000-2022) |
| continent | Continent affected by disasters such as floods and hurricanes |
| total_deaths | Number of deaths caused by disasters |
| no_injured | Number of people injured by disasters |
| no_affected | Number of people affected by disasters |
| no_homeless | Number of people made homeless by disasters |
| total_affected | Total number of people affected by disasters |
| total_damages | Total infrastructure damages |
D7_0_COVID
| Variable Name | Explanation |
|---|---|
| code | Country code (ISO) |
| country | Country name |
| year | Year of the observation (2000-2022) |
| deaths_per_million | COVID-19 deaths per million people |
| cases_per_million | COVID-19 cases per million people |
| stringency | Government Response Stringency Index: composite measure based on 9 response indicators including school closures, workplace closures, and travel bans |
D8_0_Conflicts
| Variable Name | Explanation |
|---|---|
| code | Country code (ISO) |
| country | Country name |
| year | Year of the observation (2000-2022) |
| ongoing | Variable coded 1 for more than 25 deaths in intrastate conflict and 0 otherwise according to UCDP/PRIO Armed Conflict Dataset 17.1. |
| sum_deaths | Best estimate of deaths in all categories of violence (non-state, one-sided and state-based) recorded by the Uppsala Conflict Data Program in the country based on the UCDP GED dataset (unpublished 2016 data). The location of these events is used for estimating the extent of violence. |
| pop_affected | Share of population affected by violence in percentage (0 to 100) measured as described above based on population data from CIESIN, the PRIO-GRID structure as well as UCDP GED. |
| area_affected | Area affected by conflict |
| maxintensity | Two intensity levels are coded: minor armed conflicts (1) and wars (2). Takes the maximum intensity of conflict in the country, so it is coded 2 if there is at least one war (>=1000 deaths in intrastate conflict) during the year. Data from the UCDP/PRIO Armed Conflict Dataset 17.1. |
2.3 Wrangling/cleaning
To accommodate the large scale of the datasets, we pre-cleaned each one prior to merging. This streamlined the process, simplifying the cleaning of the final, combined dataset.
2.3.1 Dataset on SDG
This is our main dataset, that we clean in order to keep the columns containing the following information: country name, country code, year, population, overall score and the SDGs scores.
We start by importing the data and converting it into a DataFrame. Next, we rename the columns and convert the scores into numeric variables.
Code
D1_0_SDG <- read.csv(here("scripts","data","SDG.csv"), sep = ";")
D1_0_SDG <- as.data.frame(D1_0_SDG)
D1_0_SDG <- D1_0_SDG[,1:22]
colnames(D1_0_SDG) <- c("code", "country", "year", "population", "overallscore", "goal1", "goal2", "goal3", "goal4", "goal5", "goal6", "goal7", "goal8", "goal9", "goal10", "goal11", "goal12", "goal13", "goal14", "goal15", "goal16", "goal17")
D1_0_SDG[["overallscore"]] <- as.double(gsub(",", ".", D1_0_SDG[["overallscore"]]))
makenumSDG <- function(D1_0_SDG) {
for (i in 1:17) {
varname <- paste("goal", i, sep = "")
D1_0_SDG[[varname]] <- as.double(gsub(",", ".", D1_0_SDG[[varname]]))
}
return(D1_0_SDG)
}
D1_0_SDG <- makenumSDG(D1_0_SDG)
We proceed by examining the missing values.
Code
propmissing <- numeric(length(D1_0_SDG))
for (i in 1:length(D1_0_SDG)){
proportion <- mean(is.na(D1_0_SDG[[i]]))
propmissing[i] <- proportion
}
variable_names <- colnames(D1_0_SDG)
prop_missing_data <- data.frame(variable = variable_names, prop_missing = propmissing)
ggplot(prop_missing_data, aes(x = variable, y = prop_missing)) +
geom_bar(stat = "identity", fill = "skyblue", color = "black") +
labs(title = "NAs in the main dataset",
x = "Variable",
y = "Proportion of Missing Values") +
theme_minimal()+
coord_flip()
Observing that the ‘population’ column contains numerous NAs, we investigate and discover that missing values are common, as some observations represent regions, not countries. Therefore, we can safely exclude these observations.
Code
SDG0 <- D1_0_SDG |>
group_by(code) |>
select(population) |>
summarize(NaPop = mean(is.na(population))) |>
filter(NaPop != 0)
ggplot(SDG0, aes(x = code, y = NaPop)) +
geom_bar(stat = "identity", fill = "lightgreen", color = "black") +
labs(title = "NAs in population variable are for regions and not countries",
x = "Code",
y = "Proportion of Missing Values") +
theme_minimal() +
theme(axis.text.x = element_text(angle = 45, hjust = 1))
D1_0_SDG <- D1_0_SDG %>%
filter(!str_detect(code, "^_"))
Now, there are no missing values in the ‘population’ variable, and we observe that it contains information on 166 countries.
We notice that NAs are present in only three SDG scores: 1, 10, and 14. Additionally, when a country has NAs, they occur across all years or not at all. Consequently, we decide to conduct further investigations on these three SDG scores to determine whether to include them in our analysis.
Code
SDG1 <- D1_0_SDG |>
group_by(code) |>
select(contains("goal")) |>
summarize(Na1 = mean(is.na(goal1)),
Na2 = mean(is.na(goal2)),
Na3 = mean(is.na(goal3)),
Na4 = mean(is.na(goal4)),
Na5 = mean(is.na(goal5)),
Na6 = mean(is.na(goal6)),
Na7 = mean(is.na(goal7)),
Na8 = mean(is.na(goal8)),
Na9 = mean(is.na(goal9)),
Na10 = mean(is.na(goal10)),
Na11 = mean(is.na(goal11)),
Na12 = mean(is.na(goal12)),
Na13 = mean(is.na(goal13)),
Na14 = mean(is.na(goal14)),
Na15 = mean(is.na(goal15)),
Na16 = mean(is.na(goal16)),
Na17 = mean(is.na(goal17))) |>
filter(Na1 != 0 | Na2 != 0 | Na3 != 0| Na4 != 0| Na5 != 0| Na6 != 0| Na7 != 0| Na8 != 0| Na9 != 0| Na10 != 0| Na11 != 0| Na12 != 0| Na13 != 0| Na14 != 0| Na15 != 0| Na16 != 0| Na17 != 0)
result_list <- list()
for (col in names(SDG1)[-1]) {
count_na <- sum(SDG1[[col]] != 0)
temp_df <- data.frame(Goal = col, Count_NA = count_na, stringsAsFactors = FALSE)
result_list <- c(result_list, list(temp_df))
}
result_df <- do.call(rbind, result_list)
ggplot(result_df, aes(x = reorder(Goal, Count_NA), y = Count_NA)) +
geom_bar(stat = "identity", fill = "lightyellow", color = "black") +
labs(title = "Three goals have NAs",
x = "Goal",
y = "Count of Missing Values") +
theme_minimal() +
theme(axis.text.x = element_text(angle = 45, hjust = 1))
For goal 1, only 15 different countries (9.04%) have missing values. Goal 1 being “End poverty”, we decide to keep it and only remove the countries with no information for the analysis.
Code
SDG2 <- D1_0_SDG |>
group_by(code) |>
select(contains("goal")) |>
summarize(Na1 = mean(is.na(goal1))) |>
filter(Na1 != 0)
country_number <- length(unique(D1_0_SDG$country))
length(unique(SDG2$code))/country_number
#> [1] 0.0904
For goal 10, only 17 different countries (10.2%) have missing values. Goal 10 being “Reduced inequalities”, we decide to keep it and only remove the countries with no information for the analysis.
Code
SDG3 <- D1_0_SDG |>
group_by(code) |>
select(contains("goal")) |>
summarize(Na10 = mean(is.na(goal10))) |>
filter(Na10 != 0)
length(unique(SDG3$code))/country_number
#> [1] 0.102
For goal 14, 40 different countries (24.1%) have missing values. Goal 14 being “Life below water”, we decide not to keep it, because other SDGs such as “Life on land” and “Clean water” already treat similar subjects.
Code
SDG4 <- D1_0_SDG |>
group_by(code) |>
select(contains("goal")) |>
summarize(Na14 = mean(is.na(goal14))) |>
filter(Na14 != 0)
length(unique(SDG4$code))/country_number
#> [1] 0.241
D1_0_SDG <- D1_0_SDG %>% select(-goal14)
We will work with various datasets and merge them using the country code and year as key identifiers. To ensure accurate matching, we first verify that country names are encoded in UTF-8. Then, we standardize the country names (requiring a custom match for Turkey) and the country codes, using the countrycode library. Additionally, we compile a list of all country codes from the main database to filter the other datasets. Lastly, we complete the database to include all possible “country, year” combinations, ensuring the total number of rows remains unchanged.
Code
D1_0_SDG$country <- stri_encode(D1_0_SDG$country, to = "UTF-8")
D1_0_SDG <- D1_0_SDG %>%
mutate(country = countrycode(country, "country.name", "country.name", custom_match = c("Türkiye"="Turkey")))
D1_0_SDG$code <- countrycode(
sourcevar = D1_0_SDG$code,
origin = "iso3c",
destination = "iso3c",
)
list_country <- c(unique(D1_0_SDG$code))
D1_0_SDG_country_list <- D1_0_SDG %>%
filter(code %in% list_country) %>%
select(code, country)
D1_0_SDG_country_list <- D1_0_SDG_country_list %>%
select(code, country) %>%
distinct()
Finally, we complete the database to ensure there are no missing pairs of (year, code).
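A base-R sketch of this completion step (the report’s equivalent could also use tidyr::complete()): build the full grid of (code, year) pairs, then left-join the observed rows onto it, so unobserved pairs appear with NA. Codes and values below are illustrative.

```r
# Toy illustration: make sure every (code, year) pair exists,
# filling unobserved pairs with NA. Values are made up.
obs <- data.frame(code = c("CHE", "CHE", "FRA"),
                  year = c(2000, 2001, 2000),
                  overallscore = c(75.0, 75.4, 76.2))
grid <- expand.grid(code = unique(obs$code),
                    year = 2000:2001,
                    stringsAsFactors = FALSE)       # all (code, year) pairs
completed <- merge(grid, obs, by = c("code", "year"), all.x = TRUE)
# (FRA, 2001) now exists as a row with overallscore = NA.
```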
Here are the first few lines of the cleaned dataset on SDG achievement scores:
For this first dataset, we reduced the size from 4,140 observations across 120 variables to 3,818 observations for 21 variables.
As noted, this is now our main dataset. All subsequent datasets will be merged with it. Therefore, for each of the following datasets, we make sure to keep only data for the same countries and years as in this dataset. We have a total of 166 countries, and the years range from 2000 to 2022.
2.3.2 Dataset on Unemployment rate
For this dataset, the initial step involves importing the data. Next, we ensure that the names and codes of the countries are encoded in UTF-8, preventing discrepancies due to mismatches in country names. Following this, we rename the columns and filter the data to include only the relevant countries and years, specifically the years 2000 to 2022 and the 166 countries from our primary dataset.
Code
D2_1_Unemployment_rate <- read.csv(here("scripts","data","UnemploymentRate.csv")) %>%
as.data.frame() %>%
mutate(
country = iconv(ref_area.label, to = "UTF-8", sub = "byte"),
country = countrycode(country, "country.name", "country.name"),
year = time,
`unemployment rate` = obs_value / 100,
age_category = classif1.label,
sex = sex.label
) %>%
select(-ref_area.label, -time, -obs_value, -classif1.label, -sex.label, -source.label, -obs_status.label, -indicator.label) %>%
merge(D1_0_SDG_country_list[, c("country", "code")], by = "country", all.x = TRUE) %>%
filter(year >= 2000 & year <= 2022,
!str_detect(sex, fixed("Male")) & !str_detect(sex, fixed("Female")),
code %in% D1_0_SDG_country_list$code,
age_category == "Age (Youth, adults): 15+") %>%
select(code, country, year, `unemployment rate`) %>%
distinct()
Here are the first few lines of the cleaned dataset on Unemployment rate:
For this dataset, we reduced the size from 82,800 observations across 8 variables to 3,812 observations for 5 variables.
2.3.3 Datasets on GDP and military expenditures
We have three different databases containing information on each country over the years, with one variable per year. We want to extract three variables for our analysis: GDP per capita, military expenditures as a percentage of GDP, and military expenditures as a percentage of government expenditures.
Code
GDPpercapita <-
read.csv(here("scripts","data","GDPpercapita.csv"), sep = ";")
MilitaryExpenditurePercentGDP <-
read.csv(here("scripts","data","MilitaryExpenditurePercentGDP.csv"), sep = ";")
MiliratyExpenditurePercentGovExp <-
read.csv(here("scripts","data","MiliratyExpenditurePercentGovExp.csv"), sep = ";")
After importing the data, we fill in the missing country codes using the column Indicator.Name: we realized after some manipulation that some of the country codes were wrong, but the next column contained the right ones.
Code
fill_code <- function(data){
data <- data %>%
mutate(Country.Code = ifelse(!grepl("^[A-Z]{3}$", Country.Code), Indicator.Name, Country.Code))
}
We create a set of functions that we apply to each database. First, remove the variables we don’t need, i.e., the years before 2000. Second, make sure that the values are numeric and rename the year variables (they all had an “X” before the year number). Third, transform the database from wide to long, to match the main database. Fourth, transform the year variable into an integer and rearrange and rename the columns to match those of the other databases. Then, we apply these transformations to the three databases.
Code
remove <- function(data){
years <- seq(1960, 1999)
removeyears <- paste("X", years, sep = "")
data <- data[, !(names(data) %in% c("Indicator.Name", "Indicator.Code", "X", removeyears))]
}
makenum <- function(data) {
for (i in 2000:2022) {
year <- paste("X", i, sep = "")
data[[year]] <- as.numeric(data[[year]])
}
return(data)
}
renameyear <- function(data) {
for (i in 2000:2022) {
varname <- paste("X", i, sep = "")
names(data)[names(data) == varname] <- gsub("X", "", varname)
}
return(data)
}
wide2long <- function(data) {
data <- pivot_longer(data,
cols = -c("Country.Name", "Country.Code"),
names_to = "year",
values_to = "data")
return(data)
}
yearint <- function(data) {
data$year <- as.integer(data$year)
return(data)
}
nameorder <- function(data) {
colnames(data) <- c("country", "code", "year", "data")
data <- data %>% select(c("code", "country", "year", "data"))
}
cleanwide2long <- function(data){
data <- fill_code(data)
data <- remove(data)
data <- makenum(data)
data <- renameyear(data)
data <- wide2long(data)
data <- yearint(data)
data <- nameorder(data)
}
GDPpercapita <- cleanwide2long(GDPpercapita)
MilitaryExpenditurePercentGDP <- cleanwide2long(MilitaryExpenditurePercentGDP)
MiliratyExpenditurePercentGovExp <- cleanwide2long(MiliratyExpenditurePercentGovExp)
We rename the columns with the main information, standardize the country codes, and remove the countries that are not in our main database. We see that all 166 countries are there.
Code
GDPpercapita <- GDPpercapita %>%
rename(GDPpercapita = data)
MilitaryExpenditurePercentGDP <- MilitaryExpenditurePercentGDP %>%
rename(MilitaryExpenditurePercentGDP = data)
MiliratyExpenditurePercentGovExp <- MiliratyExpenditurePercentGovExp %>%
rename(MiliratyExpenditurePercentGovExp = data)
GDPpercapita$code <- countrycode(
sourcevar = GDPpercapita$code,
origin = "iso3c",
destination = "iso3c",
)
MilitaryExpenditurePercentGDP$code <- countrycode(
sourcevar = MilitaryExpenditurePercentGDP$code,
origin = "iso3c",
destination = "iso3c",
)
MiliratyExpenditurePercentGovExp$code <- countrycode(
sourcevar = MiliratyExpenditurePercentGovExp$code,
origin = "iso3c",
destination = "iso3c",
)
GDPpercapita <- GDPpercapita %>% filter(code %in% list_country)
length(unique(GDPpercapita$code))
#> [1] 166
MilitaryExpenditurePercentGDP <- MilitaryExpenditurePercentGDP %>% filter(code %in% list_country)
length(unique(MilitaryExpenditurePercentGDP$code))
#> [1] 166
MiliratyExpenditurePercentGovExp <- MiliratyExpenditurePercentGovExp %>% filter(code %in% list_country)
length(unique(MiliratyExpenditurePercentGovExp$code))
#> [1] 166
There were only 157 countries present in both the main SDG dataset and these 3 datasets, but we suspected that some of the missing countries were in the database without being correctly matched. Indeed, the Bahamas was in the database, but instead of the code “BHS” there was “The”; for “COD” it was “Dem. Rep.”; for “COG” it was “Rep”; etc. We noticed that the code appears in another column of the initial database, “Indicator.Name”. We went back to the initial database, set the right codes before cleaning it (as seen above), and after rerunning the code we see that we have all 166 countries from the initial dataset.
Code
list_country_GDP <- c(unique(GDPpercapita$code))
(missing <- setdiff(list_country, list_country_GDP))
#> character(0)We run a first round of investigation of the missing values and find that we have 16.4% for MiliratyExpenditurePercentGovExp, 12.9% for MilitaryExpenditurePercentGDP and 1.31% for GDPpercapita.
Code
mean(is.na(MiliratyExpenditurePercentGovExp$MiliratyExpenditurePercentGovExp))
#> [1] 0.164
mean(is.na(MilitaryExpenditurePercentGDP$MilitaryExpenditurePercentGDP))
#> [1] 0.129
mean(is.na(GDPpercapita$GDPpercapita))
#> [1] 0.0131
2.3.3.1 GDP per capita
For GDPpercapita, only two countries (SOM and SSD) have a lot of missing values, and in total 11 countries have missing values.
Code
GDPpercapita1 <- GDPpercapita %>%
group_by(code) %>%
summarize(NaGDP = mean(is.na(GDPpercapita))) %>%
filter(NaGDP != 0)
ggplot(GDPpercapita1, aes(x = reorder(code, NaGDP), y = NaGDP, fill = code)) +
geom_bar(stat = "identity", fill="#FFEDCC", color="black") +
labs(title = "11 countries with NAs for GDP per capita",
x = "Code",
y = "Proportion of Missing Values") +
theme_minimal() +
theme(axis.text.x = element_text(angle = 45, hjust = 1))
We plot the evolution of GDPpercapita over the years for each country containing missing values and distinguish the percentage of missing values with colors.
Code
filtered_data_GDP <- GDPpercapita %>%
filter(code %in% GDPpercapita1$code) # countries with NAs
filtered_data_GDP <- filtered_data_GDP %>%
group_by(code) %>%
mutate(PercentageMissing = mean(is.na(GDPpercapita))) %>% # column % NAs
ungroup()
Evol_Missing_GDP <- ggplot(data = filtered_data_GDP) +
geom_point(aes(x = year, y = GDPpercapita,
color = cut(PercentageMissing,
breaks = c(0, 0.1, 0.2, 1),
labels = c("0-10%", "10-20%", "20-100%")))) +
labs(title = "Evolution of GDP per capita over time", x = "Year", y = "GDP per capita") +
scale_color_manual(values = c("0-10%" = "blue", "10-20%" = "green", "20-100%" = "black"),
labels = c("0-10%", "10-20%", "20-100%")) +
guides(color = guide_legend(title = "% missings")) +
facet_wrap(~ code, nrow = 4)
print(Evol_Missing_GDP)
For the countries with less than 30% of missing values and a linear evolution over time, we fill the missing values using linear interpolation.
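The na.interp() calls below come from the forecast package; for non-seasonal series like these, it essentially performs linear interpolation between the nearest observed points. The effect can be illustrated with base R’s approx() on a toy series (values are made up):

```r
# Toy illustration of filling internal NAs by linear interpolation,
# conceptually what na.interp() does for a non-seasonal series.
year <- 2000:2004
gdp  <- c(1000, NA, 1200, NA, 1400)  # illustrative values with gaps
filled <- approx(x = year[!is.na(gdp)],
                 y = gdp[!is.na(gdp)],
                 xout = year)$y      # straight line between known points
```

Each NA is replaced by the value lying on the straight line between its observed neighbours (here 1100 and 1300).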
Code
list_code <- c("AFG", "BTN", "CUB", "STP", "TKM")
for (i in list_code) {
country_data <- GDPpercapita %>% filter(code == i)
interpolated_data <- na.interp(country_data$GDPpercapita)
GDPpercapita[GDPpercapita$code == i, "GDPpercapita"] <- interpolated_data
}
2.3.3.2 Military expenditures in percentage of GDP
For MilitaryExpenditurePercentGDP, 12 countries have 100% missing values. We investigate further and keep them for now, knowing that some of these countries may also have many missing values in the other databases once we merge everything, and will then be dropped.
Code
MilitaryExpenditurePercentGDP1 <- MilitaryExpenditurePercentGDP %>%
group_by(code) %>%
summarize(NaMil1 = round(mean(is.na(MilitaryExpenditurePercentGDP)),3)) %>%
filter(NaMil1 != 0)
ggplot(MilitaryExpenditurePercentGDP1, aes(x = reorder(code, NaMil1), y = NaMil1, fill = code)) +
geom_bar(stat = "identity", fill="#FFC0CB", color="black") +
labs(title = "Military expenditures in % of GDP: many countries have NAs",
x = "Code",
y = "Proportion of Missing Values") +
theme_minimal() +
theme(axis.text.x = element_text(angle = 90, hjust = 1))
We plot the evolution of MilitaryExpenditurePercentGDP over the years for each country containing missing values and distinguish the percentage of missing values with colors.
Code
filtered_data_Mil1 <- MilitaryExpenditurePercentGDP %>%
filter(code %in% MilitaryExpenditurePercentGDP1$code) # countries with NAs
filtered_data_Mil1 <- filtered_data_Mil1 %>%
group_by(code) %>%
mutate(PercentageMissing = mean(is.na(MilitaryExpenditurePercentGDP))) %>% # Column % NAs
ungroup()
Evol_Missing_Mil1 <- ggplot(data = filtered_data_Mil1) +
geom_line(aes(x = year, y = MilitaryExpenditurePercentGDP,
color = cut(PercentageMissing,
breaks = c(0, 0.1, 0.2, 0.3, 1),
labels = c("0-10%", "10-20%", "20-30%", "30-100%")))) +
labs(title = "Military expenditure in % of GDP over time", x = "Years from 2000 to 2022", y = "Military expenditure (% of GDP)") +
scale_color_manual(values = c("0-10%" = "blue", "10-20%" = "green", "20-30%" = "red", "30-100%" = "black"),
labels = c("0-10%", "10-20%", "20-30%", "30-100%")) +
guides(color = guide_legend(title = "% missings")) +
facet_wrap(~ code, nrow = 6) +
theme(strip.text = element_text(size = 6)) +
scale_x_continuous(breaks = NULL) +
scale_y_continuous(breaks = NULL)
print(Evol_Missing_Mil1)
For the countries with less than 30% of missing values and a linear evolution over time, we fill the missing values using linear interpolation.
Code
list_code <- c("AFG", "BDI", "BEN", "CAF", "CIV", "COD", "GAB", "GMB", "KAZ", "LBN", "LBR", "MNE", "MRT", "NER", "TJK", "TTO", "ZMB")
for (i in list_code) {
country_data <- MilitaryExpenditurePercentGDP %>% filter(code == i)
interpolated_data <- na.interp(country_data$MilitaryExpenditurePercentGDP)
MilitaryExpenditurePercentGDP[MilitaryExpenditurePercentGDP$code == i, "MilitaryExpenditurePercentGDP"] <- interpolated_data
}
2.3.3.3 Military expenditures in percentage of government expenditures
For MiliratyExpenditurePercentGovExp, 17 countries have 100% missing values. We investigate further and keep them for now, knowing that some of these countries may also have many missing values in the other databases once we merge everything, and will then be dropped.
Code
MiliratyExpenditurePercentGovExp1 <- MiliratyExpenditurePercentGovExp %>%
group_by(code) %>%
summarize(NaMil2 = round(mean(is.na(MiliratyExpenditurePercentGovExp)),3)) %>%
filter(NaMil2 != 0)
ggplot(MiliratyExpenditurePercentGovExp1, aes(x = reorder(code, NaMil2), y = NaMil2, fill = code)) +
geom_bar(stat = "identity", fill="#E6E6FA", color="black") +
labs(title = "Military expenditures in % of government expenditures: many countries have NAs",
x = "Code",
y = "Proportion of Missing Values") +
theme_minimal() +
theme(axis.text.x = element_text(angle = 90, hjust = 1, size=8))
We plot the evolution of MiliratyExpenditurePercentGovExp over the years for each country containing missing values and distinguish the percentage of missing values with colors.
Code
filtered_data_Mil2 <- MiliratyExpenditurePercentGovExp %>%
filter(code %in% MiliratyExpenditurePercentGovExp1$code) # Countries with NAs
filtered_data_Mil2 <- filtered_data_Mil2 %>%
group_by(code) %>%
mutate(PercentageMissing = mean(is.na(MiliratyExpenditurePercentGovExp))) %>% # Column % NAs
ungroup()
Evol_Missing_Mil2 <- ggplot(data = filtered_data_Mil2) +
geom_line(aes(x = year, y = MiliratyExpenditurePercentGovExp,
color = cut(PercentageMissing,
breaks = c(0, 0.1, 0.2, 0.3, 1),
labels = c("0-10%", "10-20%", "20-30%", "30-100%")))) +
labs(title = "Military expenditure in % of government expenditures over time", x = "Years from 2000 to 2022", y = "Military expenditure (% of government expenditures)") +
scale_color_manual(values = c("0-10%" = "blue", "10-20%" = "green", "20-30%" = "red", "30-100%" = "black"),
labels = c("0-10%", "10-20%", "20-30%", "30-100%")) +
guides(color = guide_legend(title = "% missings")) +
facet_wrap(~ code, nrow = 7) +
theme(strip.text = element_text(size = 6)) +
scale_x_continuous(breaks = NULL) +
scale_y_continuous(breaks = NULL)
print(Evol_Missing_Mil2)
For the countries with less than 30% of missing values and a linear evolution over time, we fill the missing values using linear interpolation.
Code
list_code <- c("AFG", "ARM", "BEN", "BIH", "BLR", "COG", "ECU", "GAB", "GMB", "KAZ", "LBN", "LBR", "MNE", "MWI", "NER", "TTO", "UKR", "ZMB")
for (i in list_code) {
country_data <- MiliratyExpenditurePercentGovExp %>% filter(code == i)
interpolated_data <- na.interp(country_data$MiliratyExpenditurePercentGovExp)
MiliratyExpenditurePercentGovExp[MiliratyExpenditurePercentGovExp$code == i, "MiliratyExpenditurePercentGovExp"] <- interpolated_data
}
We now look again at the percentage of missing values for the three databases: 14.9% for MiliratyExpenditurePercentGovExp, 11.6% for MilitaryExpenditurePercentGDP, and 1.07% for GDPpercapita.
Code
mean(is.na(MiliratyExpenditurePercentGovExp$MiliratyExpenditurePercentGovExp))
#> [1] 0.149
mean(is.na(MilitaryExpenditurePercentGDP$MilitaryExpenditurePercentGDP))
#> [1] 0.116
mean(is.na(GDPpercapita$GDPpercapita))
#> [1] 0.0107
D3_1_GDP_per_capita <- GDPpercapita
D3_2_Military_Expenditure_Percent_GDP <- MilitaryExpenditurePercentGDP
D3_3_Miliraty_Expenditure_Percent_Gov_Exp <- MiliratyExpenditurePercentGovExp
Here are the first few lines of the cleaned dataset of GDP per capita:
For this dataset, we went from ??? observations for 68 variables to 3,818 observations for 4 variables.
Here are the first few lines of the cleaned dataset of military expenditures in percentage of GDP:
For this dataset, we went from ??? observations for 68 variables to 3,818 observations for 4 variables.
Here are the first few lines of the cleaned dataset of military expenditures in percentage of government expenditures:
2.3.4 Dataset on internet usage
To prepare the dataset on internet usage in the world to be merged with the other data, we first import the data. Then, we keep only the years we are interested in (2000 to 2022). We also rename the columns and keep only the countries that match the list of countries in the main SDG dataset.
Code
D4_0_Internet_usage <- read.csv(here("scripts", "data", "InternetUsage.csv")) %>%
filter(Year >= 2000, Year <= 2022) %>%
rename(
code = Code,
country = Entity,
year = Year,
internet_usage = Individuals.using.the.Internet....of.population.
) %>%
mutate(internet_usage = internet_usage / 100) %>%
filter(code %in% list_country) %>%
select(code, country, year, internet_usage)
Here are the first few lines of the cleaned dataset of internet usage:
For this dataset, we reduced the size from 6,570 observations across 4 variables to 3,433 observations for 4 variables.
2.3.5 Dataset on human freedom index
After importing the data from the CATO Institute website, we noticed that even though the file was called “Human Freedom Index 2022”, the available observations only covered 2000 to 2020. We first modified the data to match our other datasets by renaming, re-encoding and standardizing the column containing the country names.
Code
data <- read.csv(here("scripts", "data", "human-freedom-index-2022.csv"))
#data in tibble
datatibble <- tibble(data)
# Rename the column countries into country to match the other databases
names(datatibble)[names(datatibble) == "countries"] <- "country"
# Make sure the encoding of the country names are UTF-8
datatibble$country <- iconv(datatibble$country, to = "UTF-8", sub = "byte")
# standardize country names
datatibble <- datatibble %>%
mutate(country = countrycode(country, "country.name", "country.name"))
Once done, we could verify which countries were or were not present in both this dataset and our main SDG dataset. We decided to keep the ones that matched between the two datasets.
Code
# Merge by country name
datatibble <- datatibble %>%
left_join(D1_0_SDG_country_list, by = "country")
datatibble <- datatibble %>% filter(code %in% list_country)
(length(unique(datatibble$code)))
#> [1] 159
# See which ones are missing
list_country_free <- c(unique(datatibble$code))
(missing <- setdiff(list_country, list_country_free))
#> [1] "AFG" "CUB" "MDV" "STP" "SSD" "TKM" "UZB"
# Turkey was missing but present in the initial database (it was a problem when standardizing the country names of D1_0_SDG_country_list that we corrected) and the other missing countries are: "AFG" "CUB" "MDV" "STP" "SSD" "TKM" "UZB"
D5_0_Human_freedom_index <- datatibble
Then, we noticed that many of the 141 variables were not relevant for us. We decided to keep the ones that refer to the country information (such as code, year, ...) and the human freedom scores per category (pf for personal freedom, ef for economic freedom).
Code
# erasing useless columns to keep only the general ones.
D5_0_Human_freedom_index <- select(D5_0_Human_freedom_index, year, country, region, hf_score, pf_rol, pf_ss, pf_movement, pf_religion, pf_assembly, pf_expression, pf_identity, pf_score, ef_government, ef_legal, ef_money, ef_trade, ef_regulation, ef_score, code)
D5_0_Human_freedom_index <- D5_0_Human_freedom_index %>%
rename(
pf_law = names(D5_0_Human_freedom_index)[5], # Renames the 5th column to "pf_law"
pf_security = names(D5_0_Human_freedom_index)[6] # Renames the 6th column to "pf_security"
)
After renaming the columns pf_law/pf_security for readability, we investigated how the NA values are distributed among the countries and the variables. After computing the percentages of missing values per country and variable, heatmaps proved to be a great tool for visualizing the data.
Code
na_percentage_by_country <- D5_0_Human_freedom_index %>%
group_by(country) %>%
select(-code) %>%
summarise(across(everything(), ~mean(is.na(.))*100))
na_long <- na_percentage_by_country %>%
pivot_longer(
cols = -country,
names_to = "Variable",
values_to = "NA_Percentage"
)
overall_na_percentage <- na_long %>%
group_by(Variable) %>%
summarize(Avg_NA_Percentage = mean(NA_Percentage, na.rm = TRUE)) %>%
arrange(desc(Avg_NA_Percentage))
print(overall_na_percentage)
#> # A tibble: 17 x 2
#> Variable Avg_NA_Percentage
#> <chr> <dbl>
#> 1 ef_money 10.4
#> 2 ef_trade 10.4
#> 3 ef_score 10.4
#> 4 hf_score 10.4
#> 5 pf_score 10.4
#> 6 ef_regulation 9.49
#> 7 ef_government 2.91
#> 8 ef_legal 1.71
#> 9 pf_law 1.44
#> 10 pf_identity 0.299
#> 11 pf_assembly 0
#> 12 pf_expression 0
#> 13 pf_movement 0
#> 14 pf_religion 0
#> 15 pf_security 0
#> 16 region 0
#> 17 year 0
Then, to get a better understanding of the situation, we ordered the countries having at least one variable with 50% or more missing values.
Code
na_long <- na_long %>%
group_by(country) %>%
mutate(Count_NA_50_100 = sum(NA_Percentage >= 50 & NA_Percentage <= 100, na.rm = TRUE)) %>%
ungroup() %>%
arrange(desc(Count_NA_50_100))
heatmap_ordered_all <- ggplot(na_long, aes(x = reorder(country, -Count_NA_50_100), y = Variable)) +
geom_tile(aes(fill = NA_Percentage), colour = "white") +
scale_fill_gradient(low = "white", high = "red") +
theme_minimal() +
labs(
title = "Heatmap of NA Percentages per Country and Variable",
x = "Countries",
y = "Variables",
fill = "NA Percentage"
) +
theme(
axis.text.x = element_blank(), # Hide x-axis labels
axis.text.y = element_text(size = 9)
)
print(heatmap_ordered_all)
We notice that only some countries contain at least 50% of missing values and that most of the missing values concern the EF variables (economic freedom). We then produce another heatmap containing only these ordered countries, counting for each of them the number of variables with at least 50% of NAs.
Code
na_long_filtered <- na_long %>%
group_by(country) %>%
mutate(Count_NA_50_100 = sum(NA_Percentage >= 50 & NA_Percentage <= 100, na.rm = TRUE)) %>%
filter(Count_NA_50_100 > 0) %>%
ungroup() %>%
arrange(desc(Count_NA_50_100))
heatmap_ordered_filtered <- ggplot(na_long_filtered, aes(x = reorder(country, -Count_NA_50_100), y = Variable)) +
geom_tile(aes(fill = NA_Percentage), colour = "white") +
scale_fill_gradient(low = "white", high = "red") +
theme_minimal() +
labs(
title = "Heatmap of NA Percentages per Country and Variable",
x = "Countries",
y = "Variables",
fill = "NA Percentage"
) +
theme(
axis.text.x = element_text(angle = 90, hjust = 1),
axis.text.y = element_text(size = 7)
)
print(heatmap_ordered_filtered)
country_na_count <- na_long %>%
filter(NA_Percentage >= 50) %>%
group_by(country) %>%
summarise(Count_NA_50_100 = n()) %>%
arrange(desc(Count_NA_50_100))
print(country_na_count)
#> # A tibble: 13 x 2
#> country Count_NA_50_100
#> <chr> <int>
#> 1 Comoros 8
#> 2 Djibouti 8
#> 3 Somalia 8
#> 4 Belarus 6
#> 5 Guinea 6
#> 6 Iraq 6
#> 7 Laos 6
#> 8 Sudan 6
#> 9 Bhutan 5
#> 10 Liberia 5
#> 11 Bahamas 1
#> 12 Belize 1
#> 13 Brunei 1
We conclude that 13 countries are concerned by our threshold of 50% or more missing values. After discussion, we concluded that a large part of these 13 countries would not be selected anyway, because they also have many missing values in our main dataset. Therefore, we decided to merge this data with the other datasets and finish the cleaning afterwards.
Here are the first few lines of the partially cleaned dataset on Human Freedom Index scores:
For this dataset, we reduced the size from 3,465 observations across 141 variables to 3,339 observations for 19 variables.
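The heatmaps above are driven by one computation: the share of NA values per country and variable. A minimal Python sketch of that grouping logic, using hypothetical rows and variable names rather than the project data:

```python
# Sketch (hypothetical data): percentage of missing values per country and
# variable, the quantity plotted in the NA heatmaps.
rows = [
    {"country": "A", "hf_score": 7.1, "ef_money": None},
    {"country": "A", "hf_score": None, "ef_money": None},
    {"country": "B", "hf_score": 6.0, "ef_money": 5.5},
]

def na_share_by_country(rows, variables):
    """Return {country: {variable: % of missing values}}."""
    result = {}
    for row in rows:
        bucket = result.setdefault(row["country"], {v: [0, 0] for v in variables})
        for v in variables:
            bucket[v][1] += 1          # total observations for this country
            if row[v] is None:
                bucket[v][0] += 1      # missing observations
    return {c: {v: 100 * na / n for v, (na, n) in b.items()}
            for c, b in result.items()}

shares = na_share_by_country(rows, ["hf_score", "ef_money"])
print(shares["A"])  # {'hf_score': 50.0, 'ef_money': 100.0}
```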
2.3.6 Dataset on Disasters
For this dataset on disasters, we imported the data from Kaggle, as we couldn't find the original, private dataset coming from the EOSDIS system, an interactive interface for browsing full-resolution, global, daily satellite images from NASA. Once we made sure that our file called “Disasters” was converted into a data frame, we selected the specific columns that we were interested in.
Code
Disasters <- as.data.frame(read.csv(here("scripts", "data", "Disasters.csv"))) %>%
select(Year, Country, ISO, Location, Continent, Disaster.Subgroup, Disaster.Type, Total.Deaths, No.Injured, No.Affected, No.Homeless, Total.Affected, Total.Damages...000.US..)
Because our file covered all the disasters in each country between 1970 and 2021 and we wanted to focus on a specific period, we filtered the data to the years 2000 to 2022. Then we rearranged the data, changing the data types and names of all the columns to match our other datasets.
Code
# Rearrange the columns, changed the type of data, renamed the columns
Rearanged_Disasters <- Disasters %>%
filter(Year >= 2000 & Year <= 2022) %>%
mutate(
code = as.character(ISO),
country = as.character(Country),
year = as.integer(Year),
continent = as.character(Continent),
disaster.subgroup = as.character(Disaster.Subgroup),
disaster.type = as.character(Disaster.Type),
location = as.character(Location),
total.deaths = as.numeric(Total.Deaths),
no.injured = as.numeric(No.Injured),
no.affected = as.numeric(No.Affected),
no.homeless = as.numeric(No.Homeless),
total.affected = as.numeric(Total.Affected),
total.damages = as.numeric(Total.Damages...000.US..)
)
We then grouped the data by “year”, “code”, “country” and “continent” and summarized it. Here you can see that we re-selected specific columns, as our first pre-selection was still too wide and some variables such as disaster.subgroup and disaster.type were not pertinent. We arranged the columns based on “code”, “country”, “year” and “continent” to match the other datasets.
Code
Disasters <- Rearanged_Disasters %>%
group_by(year,code, country, continent) %>%
summarize(
total_deaths = sum(total.deaths, na.rm = TRUE),
no_injured = sum(no.injured, na.rm = TRUE),
no_affected = sum(no.affected, na.rm = TRUE),
no_homeless = sum(no.homeless, na.rm = TRUE),
total_affected = sum(total.affected, na.rm = TRUE),
total_damages = sum(total.damages, na.rm = TRUE)
)
D6_0_Disasters <- Disasters %>%
select(code, country, year, continent, total_deaths, no_injured, no_affected, no_homeless, total_affected, total_damages) %>%
arrange(code, country, year, continent)
Finally, we filtered the disasters data to keep only the countries present in our main dataset. We analysed the missing countries and identified three (BHR, BRN, MLT) that are unexpectedly missing.
Code
D6_0_Disasters <- D6_0_Disasters %>% filter(code %in% list_country)
length(unique(D6_0_Disasters$code))
#> [1] 163
# Here we see which countries are missing
list_country_disasters <- c(unique(D6_0_Disasters$code))
(missing <- c(missing,setdiff(list_country, list_country_disasters)))
#> [1] "AFG" "CUB" "MDV" "STP" "SSD" "TKM" "UZB" "BHR" "BRN" "MLT"
Here are the first few lines of the cleaned dataset on Disasters:
2.3.7 Dataset on COVID
This dataset contains information on the COVID-19 pandemic between 2020 and 2022. The observations are daily (year, month, day). After importing the database, we transform the date in format YYYY-MM-DD to keep only the year.
Code
COVID <- read.csv(here("scripts", "data", "COVID.csv")) %>%
select(iso_code, location, date, new_cases_per_million, new_deaths_per_million, stringency_index) %>%
mutate(date = as.integer(year(date)))
We perform a first round of investigation of the missing values before aggregating the values by year. We begin with the variables “cases per million” and “deaths per million”: seeing that for each country we have either only missing values or a very low percentage of missing values (~1%), we can compute the sum over each year and ignore the missing values without altering the data. Indeed, where all the values are missing, the computation will return an NA. We then look at the “stringency” variable, for which we have 3 scenarios:
~20% missing: we ignore missing values when computing the mean to get an idea of the stringency each year (because we compute the mean stringency over the year, a few missing days are not a problem: stringency cannot evolve that fast).
all are missing: we can ignore the missing values when computing the mean, because it will still return a missing value
almost all are missing: here the mean doesn't make sense, so we replace the values by NAs to stay coherent. The countries with this issue are ERI, GUM, PRI and VIR. We verify whether they are in our main dataset; since none of them are, we can ignore the issue, as these rows will be removed later anyway.
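The three scenarios rest on how a mean behaves when every value is missing: in R, `mean(x, na.rm = TRUE)` over an all-NA vector has no data left to average and still yields a missing result. A Python sketch of that rule (an illustration, not the project code):

```python
# Sketch: a mean that skips missing values but keeps "all missing" missing,
# mirroring the behaviour of mean(x, na.rm = TRUE) on an all-NA vector in R.
def mean_skip_missing(values):
    known = [v for v in values if v is not None]
    if not known:                 # scenario 2: everything is missing
        return None               # the aggregate stays missing
    return sum(known) / len(known)

print(mean_skip_missing([40.0, None, 44.0]))  # 42.0  (scenario 1: ignore gaps)
print(mean_skip_missing([None, None]))        # None  (scenario 2: still missing)
```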
We aggregate the observations of all days of a year in one observation per country using the mean.
Code
COVID1 <- COVID %>%
group_by(iso_code) %>%
summarize(NaDeaths = round(mean(is.na(new_deaths_per_million)),3),
NaCases = round(mean(is.na(new_cases_per_million)), 3),
NaStringency = round(mean(is.na(stringency_index)), 3)) %>%
pivot_longer(cols = starts_with("Na"), names_to = "Variable", values_to = "NaValue")%>%
filter(NaValue!=0)
issue_list <- c("ERI", "GUM", "PRI", "VIR")
is.element(issue_list, list_country)
#> [1] FALSE FALSE FALSE FALSE
COVID <- COVID %>%
group_by(location, date) %>%
mutate(
cases_per_million = sum(new_cases_per_million, na.rm = TRUE),
deaths_per_million = sum(new_deaths_per_million, na.rm = TRUE),
stringency = mean(stringency_index, na.rm = TRUE)
)%>%
ungroup()
###
# Create a bubble plot
plot_ly(COVID1, x=~Variable, y=~NaValue,
type = "scatter",
marker = list(color="blue", opacity=0.1, size = 10))%>%
layout(
title = "NAs for the COVID variables",
titlefont = list(size = 20),
xaxis = list(title = "Variables"),
yaxis = list(title = "% NAs"),
annotations = list(
text = "The darker the circle, the more countries are concerned",
x = 0.5, y = 1.02, # Adjust the position of the subtitle
xref = "paper", yref = "paper",
showarrow = FALSE,
font = list(size = 16) # Adjust the size of the subtitle font
)
)
Now that all the variables of interest are aggregated by year, we remove the variables that we no longer need and rename the remaining ones to match the main dataset.
Code
COVID <- COVID %>%
group_by(location, date) %>%
distinct(date, .keep_all = TRUE) %>%
ungroup()
COVID <- COVID %>% select(-c(new_cases_per_million, new_deaths_per_million, stringency_index))
colnames(COVID) <- c("code", "country", "year", "cases_per_million", "deaths_per_million", "stringency")
We remove the years after 2022, make sure that the country codes are all 3-letter ISO codes (we observe that they are sometimes preceded by “OWID_”) and standardize the country codes.
Code
COVID <- COVID[COVID$year <= 2022, ]
COVID$code <- gsub("OWID_", "", COVID$code)
COVID$code <- countrycode(
sourcevar = COVID$code,
origin = "iso3c",
destination = "iso3c"
)
We remove the observations of countries that aren't in our main SDG dataset and find that all 166 countries of the main dataset are also present in this one.
Code
COVID <- COVID %>% filter(code %in% list_country)
length(unique(COVID$code))
#> [1] 166
We perform a second round of missing-value investigation and find that there are no missing values except for stringency, at 4.19%. For each affected country, either all values or 50% of them are missing, so these 7 countries won't be included when analyzing the effect of stringency on the SDG scores.
Code
mean(is.na(COVID$cases_per_million))
#> [1] 0
mean(is.na(COVID$deaths_per_million))
#> [1] 0
mean(is.na(COVID$stringency))
#> [1] 0.0419
COVID4 <- COVID %>%
group_by(code) %>%
summarize(NaCOVID = mean(is.na(stringency))) %>%
filter(NaCOVID != 0)
ggplot(COVID4, aes(x = reorder(code, NaCOVID), y = NaCOVID)) +
geom_bar(stat = "identity", fill = "lightgreen", color = "black") +
labs(title = "Stringency has either 100% or 50% NAs or 0%",
x = "Code",
y = "Proportion of Missing Values") +
theme_minimal() +
theme(axis.text.x = element_text(angle = 45, hjust = 1))
D7_0_COVID <- COVID
Here are the first few lines of the cleaned dataset on COVID-19:
2.3.8 Dataset on Conflicts
For our conflicts dataset, we imported the data from the World Bank data catalog. Once we made sure that our file called “Conflicts” was converted into a data frame, we selected the specific columns that we were interested in.
Code
Conflicts <- read.csv(here("scripts", "data", "Conflicts.csv")) %>%
as.data.frame() %>%
select(year, country, ongoing, gwsum_bestdeaths, pop_affected,
peaceyearshigh, area_affected, maxintensity, maxcumulativeintensity)
Our file covers all the conflicts and their consequences per country between 2000 and 2016, and we couldn't find a better or more complete dataset. As we consider conflicts as events, we will only take into account results between 2000 and 2016. Then we rearranged the data, changing the data types and names of the columns to match our other datasets. We grouped the data by “year” and “country”, re-selected some variables and summarized the data.
Code
Rearanged_Conflicts <- Conflicts %>%
filter(year >= 2000 & year <= 2022)%>%
mutate(
ongoing = as.integer(ongoing),
country = as.character(country),
year = as.integer(year),
gwsum_bestdeaths = as.numeric(gwsum_bestdeaths),
pop_affected = as.numeric(pop_affected),
area_affected = as.numeric(area_affected),
maxintensity = as.numeric(maxintensity),
)
# Group the data by "year", "country" and summarize the data
Conflicts <- Rearanged_Conflicts %>%
group_by(year, country) %>%
summarize(
ongoing = sum (ongoing, na.rm = TRUE),
sum_deaths = sum(gwsum_bestdeaths, na.rm = TRUE),
pop_affected = sum(pop_affected, na.rm = TRUE),
area_affected = sum(area_affected, na.rm = TRUE),
maxintensity = sum(maxintensity, na.rm = TRUE),
)
Afterwards, we selected specific columns from the summarized data and arranged it by our chosen columns. To make our dataset compatible with the main one and let the merging phase succeed, we made some adjustments to the country names. We then standardized and merged by country names, and finally kept only the countries present in our main dataset. Note that in the end only one additional country is missing, one that wasn't in the initial conflicts database: BLR.
Code
conflicts <- Conflicts %>%
select(country, year, ongoing, sum_deaths, pop_affected, area_affected, maxintensity) %>%
arrange(country, year)
conflicts$country <- iconv(conflicts$country, to = "UTF-8", sub = "byte")
conflicts <- conflicts %>%
mutate(country = countrycode(country, "country.name", "country.name"))
conflicts <- conflicts %>%
left_join(D1_0_SDG_country_list, by = "country")
conflicts <- conflicts %>%
select(code, country, year, ongoing, sum_deaths, pop_affected, area_affected, maxintensity) %>%
arrange(code, country, year)
D8_0_Conflicts <- conflicts %>% filter(code %in% list_country)
(length(unique(conflicts$code)))
#> [1] 166
# See which countries are missing
list_country_conflicts <- c(unique(conflicts$code))
(missing <- c(missing, setdiff(list_country, list_country_conflicts)))
#> [1] "AFG" "CUB" "MDV" "STP" "SSD" "TKM" "UZB" "BHR" "BRN" "MLT"
#> [11] "BLR"
Here are the first few lines of the cleaned dataset on Conflicts:
2.3.9 Merge data
By merging our eight pre-cleaned datasets, we create a final database.
Code
D2_1_Unemployment_rate$country <- NULL
merge_1_2 <- D1_0_SDG |> left_join(D2_1_Unemployment_rate, join_by(code, year))
D3_1_GDP_per_capita$country <- NULL
merge_12_3 <- merge_1_2 |> left_join(D3_1_GDP_per_capita, join_by(code, year))
D3_2_Military_Expenditure_Percent_GDP$country <- NULL
merge_12_3 <- merge_12_3 |> left_join(D3_2_Military_Expenditure_Percent_GDP, join_by(code, year))
D3_3_Miliraty_Expenditure_Percent_Gov_Exp$country <- NULL
merge_12_3 <- merge_12_3 |> left_join(D3_3_Miliraty_Expenditure_Percent_Gov_Exp, join_by(code, year))
D4_0_Internet_usage$country <- NULL
merge_123_4 <- merge_12_3 |> left_join(D4_0_Internet_usage, join_by(code, year))
D5_0_Human_freedom_index$country <- NULL
merge_1234_5 <- merge_123_4 |> left_join(D5_0_Human_freedom_index, join_by(code, year))
D6_0_Disasters$country <- NULL
merge_12345_6 <- merge_1234_5 |> left_join(D6_0_Disasters, join_by(code, year))
D7_0_COVID$country <- NULL
D7_0_COVID <- D7_0_COVID |> distinct(code, year, .keep_all = TRUE)
merge_123456_7 <- merge_12345_6 |> left_join(D7_0_COVID, join_by(code, year))
D8_0_Conflicts$country <- NULL
all_Merge <- merge_123456_7 |> left_join(D8_0_Conflicts, join_by(code, year))
all_Merge <- all_Merge %>% filter(!code %in% missing)
2.3.10 Cleaning of the final database
We replace the NAs of the COVID columns by 0, because they are not real missing values: they were only introduced by the merge for the years before COVID.
Code
all_Merge <- all_Merge %>%
mutate(
cases_per_million = ifelse(is.na(cases_per_million), 0, cases_per_million),
deaths_per_million = ifelse(is.na(deaths_per_million), 0, deaths_per_million),
stringency = ifelse(is.na(stringency), 0, stringency)
)
Since we took the information on the continent and region from databases other than the main one, we complete this information for the whole final dataset.
Code
all_Merge <- all_Merge %>%
group_by(country) %>%
mutate(continent = ifelse(is.na(continent), first(na.omit(continent)), continent)) %>%
ungroup()
all_Merge <- all_Merge %>%
group_by(country) %>%
mutate(region = ifelse(is.na(region), first(na.omit(region)), region)) %>%
ungroup()
We order the database, beginning with the information on the country, the year, the continent and the region.
Code
all_Merge <- all_Merge %>%
select(code, year, country, continent, region, everything())
write.csv(all_Merge, file = here("scripts","data","all_Merge.csv"))
Here are the first few lines of the final dataset:
Final structure of our merged database: each of the 166 countries from D1_1_SDG is observed each year from 2000 to 2022, so each row has a key composed of (code, year) that uniquely identifies an observation. The other columns are the variables listed above. Because some countries have a lot of missing information we will have to eliminate some of them, but we will still have more than 2000 rows in our database.
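As a quick sanity check on the stated dimensions, a balanced panel of 166 countries observed yearly from 2000 to 2022 inclusive has exactly the row count reported for the cleaned per-country datasets earlier:

```python
# Sketch: expected size of a balanced (code, year) panel.
countries = 166
years = 2022 - 2000 + 1   # 23 years, both endpoints included
print(countries * years)  # 3818
```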
2.3.11 Treatment of missing values
We load our final database and visualize the missing values.
Code
all_Merge <- read.csv(here("scripts","data","all_Merge.csv"))
all_Merge <- all_Merge %>% select(-c(X))
# Create a dataframe with the goals without NAs summarize in one column to simplify the visualization
goal_vars <- all_Merge %>%
select(starts_with("goal")) %>%
filter_all(all_vars(!is.na(.))) %>%
colnames()
to_plot_missing <- all_Merge %>%
mutate(Goals_without_NAs = rowSums(!is.na(select(., all_of(goal_vars))))) %>%
select(-c(goal2, goal3, goal4, goal5, goal6, goal7, goal8, goal9, goal11, goal12, goal13, goal15, goal16, goal17))
vis_dat(to_plot_missing, warn_large_data = FALSE) + scale_fill_brewer(palette = "Paired") +
theme(
axis.text.x = element_text(angle = 90, size = 6),
legend.text = element_text(size = 8), # Adjust the size of legend text
legend.title = element_text(size = 10)
)
We subset our database according to the data needed to answer the different questions. This will help us deal with the missing values.
For question 1, we only keep the years until 2020, because most of the explanatory variables that we want to use (those coming from the human freedom index) only have values until 2020.
Code
data_question1 <- all_Merge %>%
filter(year<=2020) %>%
select(-c(total_deaths, no_injured, no_affected, no_homeless, total_affected, total_damages, cases_per_million, deaths_per_million, stringency, ongoing, sum_deaths, pop_affected, area_affected, maxintensity))
For questions 2 and 4, we use the main data from the SDG database.
Code
data_question24 <- all_Merge %>%
select(c(code, year, country, continent, region, overallscore, goal1, goal2, goal3, goal4, goal5, goal6, goal7, goal8, goal9, goal10, goal11, goal12, goal13, goal15, goal16, goal17))
For question 3, we create 3 distinct databases according to the different types of events that we will analyse: disasters, COVID-19 and conflicts. For the disasters, we only keep the years until 2021, because we don't have data after this date. For the conflicts, we only keep the years until 2016, for the same reason.
Code
# Disasters
data_question3_1 <- all_Merge %>%
filter(year<=2021) %>%
select(c(code, year, country, continent, region, overallscore, goal1, goal2, goal3, goal4, goal5, goal6, goal7, goal8, goal9, goal10, goal11, goal12, goal13, goal15, goal16, goal17, total_deaths, no_injured, no_affected, no_homeless, total_affected, total_damages))
# COVID
data_question3_2 <- all_Merge %>%
select(c(code, year, country, continent, region, overallscore, goal1, goal2, goal3, goal4, goal5, goal6, goal7, goal8, goal9, goal10, goal11, goal12, goal13, goal15, goal16, goal17, cases_per_million, deaths_per_million, stringency))
# Conflicts
data_question3_3 <- all_Merge %>%
filter(year<=2016) %>%
select(c(code, year, country, continent, region, overallscore, goal1, goal2, goal3, goal4, goal5, goal6, goal7, goal8, goal9, goal10, goal11, goal12, goal13, goal15, goal16, goal17, ongoing, sum_deaths, pop_affected, area_affected, maxintensity))
2.3.11.1 Data for question 1
We begin by visualizing the missing values. To get a less cluttered graph, we group all the goals without NAs into one single variable.
Code
# Create a dataframe with the goals without NAs summarize in one column to simplify the visualization
variable_names <- names(data_question1)
missing_percentages <- sapply(data_question1, function(col) mean(is.na(col)) * 100)
missing_data_summary <- data.frame(
Variable = variable_names,
Missing_Percentage = missing_percentages
)
missing_data_summary <- missing_data_summary %>%
mutate(VariableGroup = ifelse(startsWith(Variable, "goal") & Missing_Percentage == 0, "Goals without NAs", as.character(Variable)))
ggplot(data = missing_data_summary, aes(x = reorder(VariableGroup, Missing_Percentage), y = Missing_Percentage, fill = Missing_Percentage)) +
geom_bar(stat = "identity") +
geom_text(aes(label = ifelse(Missing_Percentage > 1, sprintf("%.1f%%", Missing_Percentage), ""),
y = Missing_Percentage),
position = position_stack(vjust = 1), # Adjust vertical position
color = "white", # Text color
size = 2, # Text size
hjust = 1.05) +
labs(title = "Percentage of Missing Values by Variable",
x = "Variable",
y = "Missing Percentage") +
theme_minimal() +
theme(axis.text.y = element_text(hjust = 1, size=6 ),
legend.text = element_text(size = 8),
legend.title = element_text(size = 10)) +
labs(fill = "% NAs") +
coord_flip()
We create a column with the number of missing values by country over all the variables, except goal 1 and goal 10 that we already discussed. We decide to remove the countries that have more than 50 missing values.
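This count-and-threshold step can be illustrated with a minimal Python sketch, using hypothetical rows and a threshold of 2 instead of 50 (the project's actual computation is the R code below):

```python
# Sketch (hypothetical rows): count missing values per country across
# the non-excluded variables, then flag countries above a threshold.
rows = [
    {"code": "A", "x": None, "y": 1.0},
    {"code": "A", "x": None, "y": None},
    {"code": "B", "x": 2.0, "y": 3.0},
]

counts = {}
for r in rows:
    counts[r["code"]] = counts.get(r["code"], 0) + sum(
        1 for k, v in r.items() if k != "code" and v is None
    )

threshold = 2                                    # 50 in the project
flagged = [c for c, n in counts.items() if n > threshold]
print(counts)   # {'A': 3, 'B': 0}
print(flagged)  # ['A']
```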
Code
see_missing1_1 <- data_question1 %>%
group_by(code) %>%
summarise(across(-c(year, country, continent, region, population, overallscore, goal1, goal2, goal3, goal4, goal5, goal6, goal7, goal8, goal9, goal10, goal11, goal12, goal13, goal15, goal16, goal17),
~ sum(is.na(.)))) %>%
mutate(num_missing = rowSums(across(-code))) %>%
filter(num_missing > 50)
data_question1 <- data_question1 %>% filter(!code %in% see_missing1_1$code)
list_country_deleted <- c(unique(see_missing1_1$code))
Here is the graph that allows us to visualize the countries that have missing values, how many and for which variables, when there are more than 50 NAs in total.
Code
ggplot(see_missing1_1, aes(x = num_missing , y = reorder(code, num_missing), fill = num_missing)) +
geom_bar(stat = "identity") +
scale_fill_gradient(low = "lightgreen", high = "darkgreen") +
theme_minimal() +
theme(axis.text.y = element_text(hjust = 1, size=8 ),
legend.text = element_text(size = 8),
legend.title = element_text(size = 10)) +
labs(title = "Number of missing values per country containing at least 50 NAs", x = "Number of Missing Values", y = "Countries")
Now, looking at the remaining countries that have missing values and their number across all variables, we decide to remove MiliratyExpenditurePercentGovExp, because it has too many missing values and contains information similar to MilitaryExpenditurePercentGDP. We also remove hf_score, pf_score and ef_score: they have many missing values and, since these variables summarize the other ones, deleting them will not make us lose information.
Code
see_missing1_2 <- data_question1 %>%
group_by(code) %>%
summarise(across(-c(year, country, continent, region, population, overallscore, goal1, goal2, goal3, goal4, goal5, goal6, goal7, goal8, goal9, goal10, goal11, goal12, goal13, goal15, goal16, goal17),
~ sum(is.na(.)))) %>%
mutate(num_missing = rowSums(across(-code))) %>%
filter(num_missing > 0)
data_question1 <- data_question1 %>% select(-c(MiliratyExpenditurePercentGovExp, hf_score, pf_score, ef_score))
Here is the ggplot that helps us visualize the countries that have missing values after removing the countries with more than 50 NAs.
Code
ggplot(see_missing1_2, aes(x = num_missing , y = reorder(code, num_missing), fill = num_missing)) +
geom_bar(stat = "identity", width = 0.5) +
scale_fill_gradient(low = "lightgreen", high = "darkgreen") +
theme_minimal() +
theme(axis.text.y = element_text(hjust = 1, size= 6 ),
legend.text = element_text(size = 8),
legend.title = element_text(size = 10)) +
labs(title = "Number of missing values per country", x = "Number of Missing Values", y = "Countries")
We also look at patterns of missing values in the rows and see that, except for the two goals with NAs discussed earlier, there are not many patterns.
Code
naniar::gg_miss_upset(data_question1, nsets=10, nintersects=11)
2.3.11.1.1 GDP per capita
Only Venezuela has missing values that we cannot fill, so we delete the country.
Code
question1_missing_GDP <- data_question1 %>%
group_by(code) %>%
summarize(NaGDPpercapita = mean(is.na(GDPpercapita)))%>%
filter(NaGDPpercapita != 0)
data_question1 <- data_question1 %>% filter(code!="VEN")
list_country_deleted <- c(list_country_deleted, "VEN")
2.3.11.1.2 Military expenditure in % of GDP
To begin with, we delete the countries with more than 30% missing values.
Code
question1_missing_Military <- data_question1 %>%
group_by(code) %>%
summarize(NaMilitary = mean(is.na(MilitaryExpenditurePercentGDP)))%>%
filter(NaMilitary != 0)
data_question1 <- data_question1 %>% filter(code!="BRB" & code!="CRI" & code!="HTI" & code!="ISL" & code!="PAN" & code!="SYR")
list_country_deleted <- c(list_country_deleted, "BRB", "CRI", "HTI", "ISL", "PAN", "SYR")
Then, we look at the distribution of the variable per region. Seeing that all the distributions are skewed, we decide to replace the missing values, where less than 30% are missing, using the median by region.
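A minimal numeric illustration of median-by-region imputation, with hypothetical values; the project's actual code below additionally checks the 30% missing threshold per country:

```python
# Sketch (hypothetical values): impute missing entries with the median of
# the country's region, as done for military expenditure in % of GDP.
from statistics import median

rows = [
    {"code": "A", "region": "Europe", "mil_gdp": 1.2},
    {"code": "B", "region": "Europe", "mil_gdp": 2.0},
    {"code": "C", "region": "Europe", "mil_gdp": None},   # to be imputed
    {"code": "D", "region": "Europe", "mil_gdp": 1.6},
]

region_median = median(r["mil_gdp"] for r in rows if r["mil_gdp"] is not None)
for r in rows:
    if r["mil_gdp"] is None:
        r["mil_gdp"] = region_median

print(rows[2]["mil_gdp"])  # 1.6 (median of 1.2, 1.6, 2.0)
```

The median is preferred to the mean here precisely because the distributions are skewed: a few very high military budgets would pull the mean upward.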
Code
question1_missing_Military <- data_question1 %>%
group_by(code) %>%
mutate(PercentageMissing = mean(is.na(MilitaryExpenditurePercentGDP))) %>% # Column % NAs
ungroup() %>%
group_by(region) %>%
filter(sum(PercentageMissing, na.rm = TRUE) > 0)
Freq_Missing_Military <- ggplot(data = question1_missing_Military) +
geom_histogram(aes(x = MilitaryExpenditurePercentGDP,
fill = cut(PercentageMissing,
breaks = c(0, 0.1, 0.2, 0.3, 1),
labels = c("0-10%", "10-20%", "20-30%", "30-100%"))),
bins = 30) +
labs(title = "Distribution of Military expenditures in % of GDP", x = "Military expenditures in % of GDP", y = "Frequency") +
scale_fill_manual(values = c("0-10%" = "blue", "10-20%" = "green", "20-30%"="red","30-100%" = "black"), labels = c("0-10%", "10-20%", "20-30%","30-100%")) +
guides(fill = guide_legend(title = "% missings")) +
facet_wrap(~ region, nrow = 3)
print(Freq_Missing_Military)
data_question1 <- data_question1 %>%
group_by(code) %>%
mutate(
PercentageMissingByCode = mean(is.na(MilitaryExpenditurePercentGDP))
) %>%
ungroup() %>%
group_by(region) %>%
mutate(
MedianByRegion = median(MilitaryExpenditurePercentGDP, na.rm = TRUE),
MilitaryExpenditurePercentGDP = ifelse(
PercentageMissingByCode < 0.3 & !is.na(MilitaryExpenditurePercentGDP),
MilitaryExpenditurePercentGDP,
ifelse(PercentageMissingByCode < 0.3, MedianByRegion, MilitaryExpenditurePercentGDP)
)
) %>%
select(-PercentageMissingByCode, -MedianByRegion)
2.3.11.1.3 Internet usage
Only a low percentage of values is missing.
Code
question1_missing_Internet <- data_question1 %>%
group_by(code) %>%
summarize(NaInternet = mean(is.na(internet_usage)))%>%
filter(NaInternet != 0)
We look at the evolution of the variable over time. Since every series increases almost linearly, we fill the missing values with linear interpolation, except for CIV, which we delete.
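The report fills the gaps with na.interp() from the forecast package; as a rough, language-agnostic sketch, linear interpolation of interior gaps amounts to drawing a straight line between the nearest observed neighbours (this minimal version deliberately leaves leading and trailing gaps untouched, unlike na.interp):

```python
def linear_interp(xs):
    # Fill interior None runs by a straight line between the nearest
    # observed neighbours; leading/trailing gaps are left untouched.
    xs = list(xs)
    known = [i for i, v in enumerate(xs) if v is not None]
    for a, b in zip(known, known[1:]):
        step = (xs[b] - xs[a]) / (b - a)
        for i in range(a + 1, b):
            xs[i] = xs[a] + step * (i - a)
    return xs

print(linear_interp([10.0, None, None, 40.0]))  # [10.0, 20.0, 30.0, 40.0]
```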
Code
question1_missing_Internet <- data_question1 %>%
group_by(code) %>%
mutate(PercentageMissing = mean(is.na(internet_usage))) %>% # Column % NAs
filter(code %in% question1_missing_Internet$code)
Evol_Missing_Internet <- ggplot(data = question1_missing_Internet) +
geom_line(aes(x = year, y = internet_usage,
color = cut(PercentageMissing,
breaks = c(0, 0.1, 0.2, 0.3, 1),
labels = c("0-10%", "10-20%", "20-30%", "30-100%")))) +
labs(title = "Evolution of internet usage over time", x = "Years from 2000 to 2022", y = "Internet usage") +
scale_color_manual(values = c("0-10%" = "blue", "10-20%" = "green", "20-30%" = "red", "30-100%" = "black"),
labels = c("0-10%", "10-20%", "20-30%", "30-100%")) +
guides(color = guide_legend(title = "% missings")) +
scale_x_continuous(breaks=NULL)+
facet_wrap(~ code, nrow = 4)
print(Evol_Missing_Internet)
list_code <- setdiff(unique(question1_missing_Internet$code), "CIV")
for (i in list_code) {
country_data <- data_question1 %>% filter(code == i)
interpolated_data <- na.interp(country_data$internet_usage)
data_question1[data_question1$code == i, "internet_usage"] <- interpolated_data
}
data_question1 <- data_question1 %>% filter(code!="CIV")
list_country_deleted <- c(list_country_deleted, "CIV")
2.3.11.1.4 Human freedom index
2.3.11.1.4.1 Personal freedom: law
The variable pf_law has many NAs, but they are confined to a single country (BLZ), which we therefore remove.
Code
data_question1 <- data_question1 %>%
filter(code!="BLZ")
list_country_deleted <- c(list_country_deleted, "BLZ")
2.3.11.1.4.2 Economic freedom: government
Only KGZ and SRB have missing values. We plot the values over time and, since each country is missing only its first one or two years, fill them with the nearest observed year.
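The report writes these fills as explicit per-country mutate() calls; generically, replacing each gap with the nearest observed year can be sketched as follows (a hedged illustration, not the report's code):

```python
def fill_nearest(xs):
    # Replace each None with the value from the nearest observed index,
    # preferring the earlier index on ties.
    known = [i for i, v in enumerate(xs) if v is not None]
    return [v if v is not None
            else xs[min(known, key=lambda k: abs(k - i))]
            for i, v in enumerate(xs)]

print(fill_nearest([None, None, 5.0, 7.0]))  # [5.0, 5.0, 5.0, 7.0]
```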
Code
data_question1 %>%
filter(code %in% c("KGZ", "SRB")) %>%
ggplot(aes(x = year, y = ef_government)) +
geom_point(color = "green") +
facet_wrap(~ code, nrow = 1) +
labs(title = "Evolution of economic freedom: government over time", x = "Years", y = "ef_gov")
data_question1 <- data_question1 %>%
mutate(ef_government = ifelse(code == "KGZ" & year == 2000 & is.na(ef_government), ef_government[which(code == "KGZ" & year == 2001)], ef_government))
data_question1 <- data_question1 %>%
mutate(ef_government = ifelse(code == "SRB" & year == 2000 & is.na(ef_government), ef_government[which(code == "SRB" & year == 2002)], ef_government))
data_question1 <- data_question1 %>%
mutate(ef_government = ifelse(code == "SRB" & year == 2001 & is.na(ef_government), ef_government[which(code == "SRB" & year == 2002)], ef_government))
2.3.11.1.4.3 Economic freedom: money
18 countries have missing values, but the percentage of missing values is always below 25%.
Code
question1_missing_ef_money <- data_question1 %>%
group_by(code) %>%
summarize(Na_ef_money = mean(is.na(ef_money)))%>%
filter(Na_ef_money != 0)
We look at the evolution of the variable over time. For the countries where this evolution is linear, we fill in the missing values using linear interpolation.
Code
question1_missing_ef_money <- data_question1 %>%
group_by(code) %>%
mutate(PercentageMissing = mean(is.na(ef_money))) %>% # Column % NAs
filter(code %in% question1_missing_ef_money$code)
Evol_Missing_ef_money <- ggplot(data = question1_missing_ef_money) +
geom_line(aes(x = year, y = ef_money,
color = cut(PercentageMissing,
breaks = c(0, 0.1, 0.2, 0.3, 1),
labels = c("0-10%", "10-20%", "20-30%", "30-100%")))) +
labs(title = "Evolution of economic freedom: money over time", x = "Years from 2000 to 2022", y = "ef_money") +
scale_color_manual(values = c("0-10%" = "blue", "10-20%" = "green", "20-30%" = "red", "30-100%" = "black"),
labels = c("0-10%", "10-20%", "20-30%", "30-100%")) +
guides(color = guide_legend(title = "% missings")) +
facet_wrap(~ code, nrow = 4) +
scale_x_continuous(breaks = NULL)
print(Evol_Missing_ef_money)
list_code <- c("ARM", "BFA", "BIH", "GEO", "KAZ", "LSO", "MDA", "MKD")
for (i in list_code) {
country_data <- data_question1 %>% filter(code == i)
interpolated_data <- na.interp(country_data$ef_money)
data_question1[data_question1$code == i, "ef_money"] <- interpolated_data
}
Then, we look at the distribution of the variable per region. Seeing that all are skewed distributions, we decide to replace the missing values using the median by region.
Code
question1_missing_ef_money <- data_question1 %>%
group_by(code) %>%
mutate(PercentageMissing = mean(is.na(ef_money))) %>% # Column % NAs
ungroup() %>%
group_by(region) %>%
filter(sum(PercentageMissing, na.rm = TRUE) > 0)
Freq_Missing_ef_money <- ggplot(data = question1_missing_ef_money) +
geom_histogram(aes(x = ef_money,
fill = cut(PercentageMissing,
breaks = c(0, 0.1, 0.2, 0.3, 1),
labels = c("0-10%", "10-20%", "20-30%", "30-100%"))),
bins = 30) +
labs(title = "Distribution of economic freedom: money", x = "ef_money", y = "Frequency") +
scale_fill_manual(values = c("0-10%" = "blue", "10-20%" = "green", "20-30%"="red","30-100%" = "black"), labels = c("0-10%", "10-20%", "20-30%","30-100%")) +
guides(fill = guide_legend(title = "% missings")) +
facet_wrap(~ region, nrow = 2)
print(Freq_Missing_ef_money)
data_question1 <- data_question1 %>%
group_by(code) %>%
mutate(
PercentageMissingByCode = mean(is.na(ef_money))
) %>%
ungroup() %>%
group_by(region) %>%
mutate(
MedianByRegion = median(ef_money, na.rm = TRUE),
# Impute the region median only where the value is missing and the country
# has fewer than 30% missing values overall
ef_money = ifelse(
PercentageMissingByCode < 0.3 & is.na(ef_money),
MedianByRegion,
ef_money
)
) %>%
select(-PercentageMissingByCode, -MedianByRegion)
2.3.11.1.4.4 Economic freedom: trade
19 countries have missing values, but the percentage of missing values is always below 25%.
Code
question1_missing_ef_trade <- data_question1 %>%
group_by(code) %>%
summarize(Na_ef_trade = mean(is.na(ef_trade)))%>% # Column % NAs
filter(Na_ef_trade != 0)
question1_missing_ef_trade <- data_question1 %>%
group_by(code) %>%
mutate(PercentageMissing = mean(is.na(ef_trade))) %>%
filter(code %in% question1_missing_ef_trade$code)
We look at the evolution of the variable over time. For the countries where this evolution is linear, we fill in the missing values using linear interpolation.
Code
Evol_Missing_ef_trade <- ggplot(data = question1_missing_ef_trade) +
geom_line(aes(x = year, y = ef_trade,
color = cut(PercentageMissing,
breaks = c(0, 0.1, 0.2, 0.3, 1),
labels = c("0-10%", "10-20%", "20-30%", "30-100%")))) +
labs(title = "Evolution of economic freedom: trade over time", x = "Years from 2000 to 2022", y = "ef_trade") +
scale_color_manual(values = c("0-10%" = "blue", "10-20%" = "green", "20-30%" = "red", "30-100%" = "black"),
labels = c("0-10%", "10-20%", "20-30%", "30-100%")) +
guides(color = guide_legend(title = "% missings")) +
facet_wrap(~ code, nrow = 4) +
scale_x_continuous(breaks = NULL)
print(Evol_Missing_ef_trade)
# Linear interpolation for "AZE", "BFA", "ETH", "GEO", "VNH"
list_code <- c("AZE", "BFA", "ETH", "GEO", "VNH")
for (i in list_code) {
country_data <- data_question1 %>% filter(code == i)
interpolated_data <- na.interp(country_data$ef_trade)
data_question1[data_question1$code == i, "ef_trade"] <- interpolated_data
}
Then, we look at the distribution of the variable per region. Seeing that all are skewed distributions, we decide to replace the missing values using the median by region.
Code
question1_missing_ef_trade <- data_question1 %>%
group_by(code) %>%
mutate(PercentageMissing = mean(is.na(ef_trade))) %>% # Column % NAs
ungroup() %>%
group_by(region) %>%
filter(sum(PercentageMissing, na.rm = TRUE) > 0)
Freq_Missing_ef_trade <- ggplot(data = question1_missing_ef_trade) +
geom_histogram(aes(x = ef_trade,
fill = cut(PercentageMissing,
breaks = c(0, 0.1, 0.2, 0.3, 1),
labels = c("0-10%", "10-20%", "20-30%", "30-100%"))),
bins = 30) +
labs(title = "Distribution of economic freedom: trade", x = "ef_trade", y = "Frequency") +
scale_fill_manual(values = c("0-10%" = "blue", "10-20%" = "green", "20-30%"="red","30-100%" = "black"), labels = c("0-10%", "10-20%", "20-30%","30-100%")) +
guides(fill = guide_legend(title = "% missings")) +
facet_wrap(~ region, nrow = 2)
print(Freq_Missing_ef_trade)
data_question1 <- data_question1 %>%
group_by(code) %>%
mutate(
PercentageMissingByCode = mean(is.na(ef_trade))
) %>%
ungroup() %>%
group_by(region) %>%
mutate(
MedianByRegion = median(ef_trade, na.rm = TRUE),
# Impute the region median only where the value is missing and the country
# has fewer than 30% missing values overall
ef_trade = ifelse(
PercentageMissingByCode < 0.3 & is.na(ef_trade),
MedianByRegion,
ef_trade
)
) %>%
select(-PercentageMissingByCode, -MedianByRegion)
2.3.11.1.4.5 Economic freedom: regulation
12 countries have missing values, but the percentage of missing values is always below 25%.
Code
question1_missing_ef_regulation <- data_question1 %>%
group_by(code) %>%
summarize(Na_ef_regulation = mean(is.na(ef_regulation)))%>% # Column % NAs
filter(Na_ef_regulation != 0)
We look at the evolution of the variable over time. For the countries where this evolution is linear, we fill in the missing values using linear interpolation.
Code
question1_missing_ef_regulation <- data_question1 %>%
group_by(code) %>%
mutate(PercentageMissing = mean(is.na(ef_regulation))) %>%
filter(code %in% question1_missing_ef_regulation$code)
Evol_Missing_ef_regulation <- ggplot(data = question1_missing_ef_regulation) +
geom_line(aes(x = year, y = ef_regulation,
color = cut(PercentageMissing,
breaks = c(0, 0.1, 0.2, 0.3, 1),
labels = c("0-10%", "10-20%", "20-30%", "30-100%")))) +
labs(title = "Evolution of economic freedom: regulation over time", x = "Years from 2000 to 2022", y = "ef_regulation") +
scale_color_manual(values = c("0-10%" = "blue", "10-20%" = "green", "20-30%" = "red", "30-100%" = "black"),
labels = c("0-10%", "10-20%", "20-30%", "30-100%")) +
guides(color = guide_legend(title = "% missings")) +
scale_x_continuous(breaks = NULL)+
facet_wrap(~ code, nrow = 2)
print(Evol_Missing_ef_regulation)
list_code <- c("ETH", "KAZ", "MDA", "SRB")
for (i in list_code) {
country_data <- data_question1 %>% filter(code == i)
interpolated_data <- na.interp(country_data$ef_regulation)
data_question1[data_question1$code == i, "ef_regulation"] <- interpolated_data
}
Then, we look at the distribution of the variable per region. Seeing that all are skewed distributions, we decide to replace the missing values using the median by region.
Code
question1_missing_ef_regulation <- data_question1 %>%
group_by(code) %>%
mutate(PercentageMissing = mean(is.na(ef_regulation))) %>% # Column % NAs
ungroup() %>%
group_by(region) %>%
filter(sum(PercentageMissing, na.rm = TRUE) > 0)
Freq_Missing_ef_regulation <- ggplot(data = question1_missing_ef_regulation) +
geom_histogram(aes(x = ef_regulation,
fill = cut(PercentageMissing,
breaks = c(0, 0.1, 0.2, 0.3, 1),
labels = c("0-10%", "10-20%", "20-30%", "30-100%"))),
bins = 30) +
labs(title = "Distribution of economic freedom: regulation", x = "ef_regulation", y = "Frequency") +
scale_fill_manual(values = c("0-10%" = "blue", "10-20%" = "green", "20-30%"="red","30-100%" = "black"), labels = c("0-10%", "10-20%", "20-30%","30-100%")) +
guides(fill = guide_legend(title = "% missings")) +
facet_wrap(~ region, nrow = 1)
print(Freq_Missing_ef_regulation)
data_question1 <- data_question1 %>%
group_by(code) %>%
mutate(
PercentageMissingByCode = mean(is.na(ef_regulation))
) %>%
ungroup() %>%
group_by(region) %>%
mutate(
MedianByRegion = median(ef_regulation, na.rm = TRUE),
# Impute the region median only where the value is missing and the country
# has fewer than 30% missing values overall
ef_regulation = ifelse(
PercentageMissingByCode < 0.3 & is.na(ef_regulation),
MedianByRegion,
ef_regulation
)
) %>%
select(-PercentageMissingByCode, -MedianByRegion) %>%
ungroup()
Now, the only remaining missing values are in goals 1 and 10. As before, we investigate where the NAs are located, first for goal 1, then for goal 10.
Code
na_count <- sapply(data_question1, function(x) sum(is.na(x)))
na_count_df <- data.frame(variable = names(na_count), num_missing = na_count)
na_count_df_filtered <- subset(na_count_df, num_missing > 0)
ggplot(na_count_df_filtered, aes(x = num_missing, y = variable)) +
geom_col(width = 0.8, fill = 'lightblue') +
geom_text(aes(label = num_missing), vjust = 0.5,hjust = 1.1, position = position_dodge(width = 0.9)) +
theme_minimal() +
theme(axis.text.y = element_text(hjust = 1, size=10 ),
legend.text = element_text(size = 8),
legend.title = element_text(size = 10)) +
labs(title = "Number of remaining missing values per variable ",
x = "Number of NAs",
y = "Variables")
# goal1
question1_missing_goal1 <- data_question1 %>%
group_by(code) %>%
summarize(Na_goal1 = mean(is.na(goal1)))%>%
filter(Na_goal1 != 0)
data_question1 <- data_question1 %>% filter(!code %in% question1_missing_goal1$code)
# Update List of countries deleted
list_country_deleted <- c(list_country_deleted, "KWT","NZL","OMN","SGP","UKR")
# 42 NA values remain, all in goal10
The missing values for goal 1 were located in only five countries, so we removed them. At this stage, 42 missing values remain, all in goal 10, which we treat next in the same way.
Code
#goal10
question1_missing_goal10 <- data_question1 %>%
group_by(code) %>%
summarize(Na_goal10 = mean(is.na(goal10)))%>%
filter(Na_goal10 != 0)
data_question1 <- data_question1 %>% filter(!code %in% question1_missing_goal10$code)
# Update List of countries deleted
list_country_deleted <- c(list_country_deleted, "GUY","TTO")
The last two countries with missing values (GUY and TTO) are removed. Our dataset is now completely clean and ready for question 1.
2.3.11.2 Data for questions 2 and 4
We create a column with the number of missing values by country over all the variables, except goals 1 and 10, which we already discussed. Since there are no other missing values, we stop here.
Code
see_missing24 <- data_question24 %>%
group_by(code) %>%
summarise(across(everything(), ~ sum(is.na(.)))) %>%
mutate(num_missing = rowSums(across(-code))) %>%
filter(num_missing > 0)
2.3.11.3 Data for question 3
For question 3, we examine the missing values separately in each of the three sub-datasets.
Disasters
We begin by visualizing the missing values.
Code
variable_names <- names(data_question3_1)
missing_percentages <- sapply(data_question3_1, function(col) mean(is.na(col)) * 100)
missing_data_summary <- data.frame(
Variable = variable_names,
Missing_Percentage = missing_percentages
)
missing_data_summary <- missing_data_summary %>%
mutate(VariableGroup = ifelse(startsWith(Variable, "goal") & Missing_Percentage == 0, "Goals without NAs", as.character(Variable)))
ggplot(data = missing_data_summary, aes(x = reorder(VariableGroup, Missing_Percentage), y = Missing_Percentage, fill = Missing_Percentage)) +
geom_bar(stat = "identity") +
geom_text(aes(label = ifelse(Missing_Percentage > 1, sprintf("%.1f%%", Missing_Percentage), ""),
y = Missing_Percentage),
position = position_stack(vjust = 1), # Adjust vertical position
color = "white", # Text color
size = 3, # Text size
hjust = 1.05) +
labs(title = "Percentage of Missing Values by Variable",
x = "Variable",
y = "Missing Percentage") +
theme_minimal() +
theme(axis.text.y = element_text(hjust = 1)) +
coord_flip()
We create a column with the number of missing values by country over all the variables, except goals 1 and 10, which we already discussed. There are many missing values; the first few countries are shown below.
Code
see_missing3_1 <- data_question3_1 %>%
group_by(code) %>%
summarise(across(-c(goal1, goal10), # Exclude columns "goal1" and "goal10"
~ sum(is.na(.)))) %>%
mutate(num_missing = rowSums(across(-code))) %>%
filter(num_missing > 0)
for_kable <- head(see_missing3_1, 10)
kable(for_kable)
| code | year | country | continent | region | overallscore | goal2 | goal3 | goal4 | goal5 | goal6 | goal7 | goal8 | goal9 | goal11 | goal12 | goal13 | goal15 | goal16 | total_deaths | no_injured | no_affected | no_homeless | total_affected | total_damages | num_missing |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| AGO | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 6 |
| ALB | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 | 9 | 9 | 9 | 9 | 9 | 54 |
| ARE | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 21 | 21 | 21 | 21 | 21 | 21 | 126 |
| ARM | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 15 | 15 | 15 | 15 | 15 | 15 | 90 |
| AUT | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 | 8 | 8 | 8 | 8 | 8 | 48 |
| AZE | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 17 | 17 | 17 | 17 | 17 | 17 | 102 |
| BDI | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 | 3 | 3 | 3 | 3 | 3 | 18 |
| BEL | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 | 5 | 5 | 5 | 5 | 5 | 30 |
| BEN | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 | 7 | 7 | 7 | 7 | 7 | 42 |
| BFA | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 | 5 | 5 | 5 | 5 | 5 | 30 |
In this particular case, even though the disaster dataset contains many missing values, we make the hypothesis that, since disasters are uncontrollable and non-recurring events, they cannot occur every year in every country. We therefore replace the NAs with zeroes, implying that no climatic disaster occurred.
Code
data_question3_1[is.na(data_question3_1)] <- 0
COVID19
We create a column with the number of missing values by country over all the variables, except goals 1 and 10, which we already discussed. Since there are no other missing values, we stop here.
Code
see_missing3_2 <- data_question3_2 %>%
group_by(code) %>%
summarise(across(-c(goal1, goal10), # Exclude columns "goal1" and "goal10"
~ sum(is.na(.)))) %>%
mutate(num_missing = rowSums(across(-code))) %>%
filter(num_missing > 0)
Conflicts
We create a column with the number of missing values by country over all the variables, except goals 1 and 10, which we already discussed. Two countries (MNE and SRB) have missing values, so we remove them.
Code
see_missing3_3 <- data_question3_3 %>%
group_by(code) %>%
summarise(across(-c(goal1, goal10), # Exclude columns "goal1" and "goal10"
~ sum(is.na(.)))) %>%
mutate(num_missing = rowSums(across(-code))) %>%
filter(num_missing > 0)
data_question3_3 <- data_question3_3 %>% filter(!code %in% c("MNE","SRB"))
##### EXPORT as CSV #####
write.csv(data_question1, file = here("scripts","data","data_question1.csv"))
write.csv(data_question24, file = here("scripts","data","data_question24.csv"))
write.csv(data_question3_1, file = here("scripts","data","data_question3_1.csv"))
write.csv(data_question3_2, file = here("scripts","data","data_question3_2.csv"))
write.csv(data_question3_3, file = here("scripts","data","data_question3_3.csv"))
3 Exploratory data analysis
3.1 General exploration
We display the distribution of the different SDG achievement scores, using boxplots to have an overview of the median, the range with most of the observations and the outliers.
Code
data_question1 <- read.csv(here("scripts","data","data_question1.csv"))
data_question24 <- read.csv(here("scripts", "data", "data_question24.csv"))
data_question2 <- read.csv(here("scripts", "data", "data_question24.csv"))
data_question3_1 <- read.csv(here("scripts", "data", "data_question3_1.csv"))
data_question3_2 <- read.csv(here("scripts", "data", "data_question3_2.csv"))
data_question3_3 <- read.csv(here("scripts", "data", "data_question3_3.csv"))
Q3.1 <- read.csv(here("scripts", "data", "data_question3_1.csv"))
Q3.2 <- read.csv(here("scripts", "data", "data_question3_2.csv"))
Q3.3 <- read.csv(here("scripts", "data", "data_question3_3.csv"))
data <- read.csv(here("scripts", "data", "all_Merge.csv"))
Correlation_overall <- data_question1 %>%
select(population:ef_regulation)
#### boxplots ####
#for goals
#dev.off()
# boxplot(Correlation_overall[2:18],
# las = 2, # Makes the axis labels perpendicular to the axis
# par(mar = c(5, 4, 4, 2) + 0.1), # Adjusts the margins to fit all labels
# cex.axis = 0.7, # Reduces the size of the axis labels
# cex.lab = 1, # Reduces the size of the x and y labels
# notch = TRUE, # Specifies whether to add notches or not
# main = "Merged goals boxplot", # Title of the boxplot
# xlab = "Goals", # X-axis label
# ylab = "Score") # Y-axis label
#boxplot per continent
data_Q1_Africa <- data_question1 %>%
filter(continent == 'Africa')
data_Q1_Europe <- data_question1 %>%
filter(continent == 'Europe')
data_Q1_Asia <- data_question1 %>%
filter(continent == 'Asia')
data_Q1_Americas <- data_question1 %>%
filter(continent == 'Americas')
data_Q1_Oceania <- data_question1 %>%
filter(continent == 'Oceania')
#Africa
data_Q1_Africa_long <- melt(data_Q1_Africa[,8:24])
medians_AF <- data_Q1_Africa_long %>%
group_by(variable) %>%
summarize(median_value = median(value))
medians_AF$color <- ifelse(medians_AF$median_value > 75, "lightblue",
ifelse(medians_AF$median_value < 25, "red", 'orange'))
data_Q1_Africa_long <- data_Q1_Africa_long %>%
left_join(medians_AF, by = "variable")
bandwidth_nrd_AF <- bw.nrd(data_Q1_Africa_long$value)
AF <- ggplot(data_Q1_Africa_long, aes(x = variable, y = value, fill = color)) +
geom_violin(trim = FALSE, bw = bandwidth_nrd_AF) +
scale_fill_identity() +
labs(title = "Africa SDG goals boxplot", x = "Goals", y = "Score") +
geom_boxplot(width = 0.1, outlier.size = 1, fill = 'white') +
scale_y_continuous(labels = scales::label_number()) +
theme_classic() +
theme(axis.text.x = element_text(angle = 90, vjust = 0.5, hjust = 1))
#Europe
data_Q1_Europe_long <- melt(data_Q1_Europe[,8:24])
medians_EU <- data_Q1_Europe_long %>%
group_by(variable) %>%
summarize(median_value = median(value))
medians_EU$color <- ifelse(medians_EU$median_value > 75, "lightblue",
ifelse(medians_EU$median_value < 25, "red", 'orange'))
data_Q1_Europe_long <- data_Q1_Europe_long %>%
left_join(medians_EU, by = "variable")
bandwidth_nrd_EU <- bw.nrd(data_Q1_Europe_long$value)
EU <- ggplot(data_Q1_Europe_long, aes(x = variable, y = value, fill = color)) +
geom_violin(trim = FALSE, bw = bandwidth_nrd_EU) +
scale_fill_identity() +
labs(title = "European SDG goals boxplot", x = "Goals", y = "Score") +
geom_boxplot(width = 0.1, outlier.size = 1, fill = 'white') +
scale_y_continuous(labels = scales::label_number()) +
theme_classic() +
theme(axis.text.x = element_text(angle = 90, vjust = 0.5, hjust = 1))
#Asia
data_Q1_Asia_long <- melt(data_Q1_Asia[,8:24])
medians_AS <- data_Q1_Asia_long %>%
group_by(variable) %>%
summarize(median_value = median(value))
medians_AS$color <- ifelse(medians_AS$median_value > 75, "lightblue",
ifelse(medians_AS$median_value < 25, "red", 'orange'))
data_Q1_Asia_long <- data_Q1_Asia_long %>%
left_join(medians_AS, by = "variable")
bandwidth_nrd_AS <- bw.nrd(data_Q1_Asia_long$value)
AS <- ggplot(data_Q1_Asia_long, aes(x = variable, y = value, fill = color)) +
geom_violin(trim = FALSE, bw = bandwidth_nrd_AS) +
scale_fill_identity() +
labs(title = "Asian SDG goals boxplot", x = "Goals", y = "Score") +
geom_boxplot(width = 0.1, outlier.size = 1, fill = 'white') +
scale_y_continuous(labels = scales::label_number()) +
theme_classic() +
theme(axis.text.x = element_text(angle = 90, vjust = 0.5, hjust = 1))
#Americas
data_Q1_Americas_long <- melt(data_Q1_Americas[,8:24])
medians_AM <- data_Q1_Americas_long %>%
group_by(variable) %>%
summarize(median_value = median(value))
medians_AM$color <- ifelse(medians_AM$median_value > 75, "lightblue",
ifelse(medians_AM$median_value < 25, "red", 'orange'))
data_Q1_Americas_long <- data_Q1_Americas_long %>%
left_join(medians_AM, by = "variable")
bandwidth_nrd_AM <- bw.nrd(data_Q1_Americas_long$value)
AM <- ggplot(data_Q1_Americas_long, aes(x = variable, y = value, fill = color)) +
geom_violin(trim = FALSE, bw = bandwidth_nrd_AM) +
scale_fill_identity() +
labs(title = "American SDG goals boxplot", x = "Goals", y = "Score") +
geom_boxplot(width = 0.1, outlier.size = 1, fill = 'white') +
scale_y_continuous(labels = scales::label_number()) +
theme_classic() +
theme(axis.text.x = element_text(angle = 90, vjust = 0.5, hjust = 1))
#Oceania
data_Q1_Oceania_long <- melt(data_Q1_Oceania[,8:24])
medians_OC <- data_Q1_Oceania_long %>%
group_by(variable) %>%
summarize(median_value = median(value))
medians_OC$color <- ifelse(medians_OC$median_value > 75, "lightblue",
ifelse(medians_OC$median_value < 25, "red", 'orange'))
data_Q1_Oceania_long <- data_Q1_Oceania_long %>%
left_join(medians_OC, by = "variable")
bandwidth_nrd_OC <- bw.nrd(data_Q1_Oceania_long$value)
OC <- ggplot(data_Q1_Oceania_long, aes(x = variable, y = value, fill = color)) +
geom_violin(trim = FALSE, bw = bandwidth_nrd_OC) +
scale_fill_identity() +
labs(title = "Oceanian SDG goals boxplot", x = "Goals", y = "Score") +
geom_boxplot(width = 0.1, outlier.size = 1, fill = 'white') +
scale_y_continuous(labels = scales::label_number()) +
theme_classic() +
theme(axis.text.x = element_text(angle = 90, vjust = 0.5, hjust = 1))
grid.arrange(AF,EU,AS,AM,OC, ncol = 2, nrow = 3)
# Correlation_goals <- melt(Correlation_overall[,2:18])
# ggplot(Correlation_goals, aes(x= variable, y= value)) +
# geom_violin(trim=FALSE, fill="orange") +
# labs(title="Merged goals violin boxplot",x="Goals", y = "Distribution") +
# geom_boxplot(width=0.1, outlier.size = 1) +
# scale_y_continuous(labels = scales::label_number()) + #limits = c(0, 100)
# theme_classic() +
# theme(axis.text.x = element_text(angle = 90, vjust = 0.5, hjust=1))
# Note: violin tails can extend below 0 and above 100 because of kernel density
# smoothing, even though the scores themselves only range from 0 to 100
Code
# Step 1: Combine all data into one data frame with a region identifier
# Melt the data
data_long <- data_question1 %>%
select(continent, overallscore, goal1, goal2, goal3, goal4, goal5, goal6, goal7, goal8, goal9, goal10, goal11, goal12, goal13, goal15, goal16, goal17) %>%
melt()
# Calculate medians and colors
medians <- data_long %>%
group_by(variable) %>%
summarize(median_value = median(value), .groups = 'drop')
medians$color <- ifelse(medians$median_value > 75, "lightblue",
ifelse(medians$median_value < 25, "red", 'orange'))
# Join the medians back to the long data
data_long <- left_join(data_long, medians, by = "variable")
# Calculate the bandwidth
bandwidth_nrd <- bw.nrd(data_long$value)
# Create the plot
p <- ggplot(data_long, aes(x = variable, y = value, fill = color)) +
geom_violin(trim = FALSE, bw = bandwidth_nrd) +
geom_boxplot(width = 0.1, outlier.size = 1, fill = 'white') +
scale_fill_identity() +
labs(title = "SDG Goals by Region", x = "Goals", y = "Score") +
facet_grid(continent ~ ., scales = "free_y") +
scale_y_continuous(labels = scales::label_number()) +
theme_classic() +
theme(axis.text.x = element_text(angle = 90, vjust = 0.5, hjust=1))
# Print the plot
print(p)
We see different patterns among the goals. Some are quite homogeneous, with a small spread of values (e.g. the overall score, goals 2 and 8), while others are widely spread (e.g. goals 1 and 10). Goals 1, 3, 4, 7, 9, 10 and 13 take values across the whole range of possible percentages. Goals 2, 5, 8, 13 and 17 have extreme values outside the 95% confidence interval. Interestingly, goal 8 (decent work and economic growth) shows the smallest spread, whereas goal 1 (no poverty) has the largest interquartile range. Goal 2 (no hunger) has a tight spread but the greatest number of low outliers: hunger is similar across most countries, and where it differs, it differs for the worse.
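One detail worth noting about the violin plots: the bandwidth passed to geom_violin comes from bw.nrd, R's normal reference rule, documented as 1.06 * min(sd, IQR/1.34) * n^(-1/5). A small sketch of that formula follows; the quantile here interpolates linearly (matching R's default type-7 quantile), though other quantile conventions would give slightly different IQRs.

```python
from statistics import stdev

def bw_nrd(values):
    """Normal reference bandwidth: 1.06 * min(sd, IQR/1.34) * n^(-1/5)."""
    xs = sorted(values)
    n = len(xs)

    def quantile(p):
        # Linear interpolation between order statistics (R's type-7 rule)
        h = (n - 1) * p
        lo = int(h)
        hi = min(lo + 1, n - 1)
        return xs[lo] + (h - lo) * (xs[hi] - xs[lo])

    iqr = quantile(0.75) - quantile(0.25)
    return 1.06 * min(stdev(xs), iqr / 1.34) * n ** (-0.2)

print(round(bw_nrd(list(range(1, 101))), 3))  # → 12.243
```

The n^(-1/5) factor means the violins get sharper (smaller bandwidth) as the number of observations grows, which is why each continent's plot uses its own bandwidth.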
We now display boxplots for the different variables of the human freedom index, and then for our other independent variables.
Code
#for Human Freedom Index scores
#Africa
data_Q1_Africa_HFI_long <- melt(data_Q1_Africa[,29:40])
medians_HFI_AF <- data_Q1_Africa_HFI_long %>%
group_by(variable) %>%
summarize(median_value = median(value))
medians_HFI_AF$color <- ifelse(medians_HFI_AF$median_value > 7.5, "lightblue",
ifelse(medians_HFI_AF$median_value < 2.5, "red", 'orange'))
data_Q1_Africa_HFI_long <- data_Q1_Africa_HFI_long %>%
left_join(medians_HFI_AF, by = "variable")
bandwidth_nrd_HFI_AF <- bw.nrd(data_Q1_Africa_HFI_long$value)
HFI_AF <- ggplot(data_Q1_Africa_HFI_long, aes(x = variable, y = value, fill = color)) +
geom_violin(trim = FALSE, bw = bandwidth_nrd_HFI_AF) +
scale_fill_identity() +
labs(title = "African HFI Scores boxplot", x = "Human Freedom Index goals", y = "Score") +
geom_boxplot(width = 0.1, outlier.size = 1, fill = 'white') +
scale_y_continuous(labels = scales::label_number()) +
theme_classic() +
theme(axis.text.x = element_text(angle = 90, vjust = 0.5, hjust = 1))
#Europe
data_Q1_Europe_HFI_long <- melt(data_Q1_Europe[,29:40])
medians_HFI_EU <- data_Q1_Europe_HFI_long %>%
group_by(variable) %>%
summarize(median_value = median(value))
medians_HFI_EU$color <- ifelse(medians_HFI_EU$median_value > 7.5, "lightblue",
ifelse(medians_HFI_EU$median_value < 2.5, "red", 'orange'))
data_Q1_Europe_HFI_long <- data_Q1_Europe_HFI_long %>%
left_join(medians_HFI_EU, by = "variable")
bandwidth_nrd_HFI_EU <- bw.nrd(data_Q1_Europe_HFI_long$value)
HFI_EU <- ggplot(data_Q1_Europe_HFI_long, aes(x = variable, y = value, fill = color)) +
geom_violin(trim = FALSE, bw = bandwidth_nrd_HFI_EU) +
scale_fill_identity() +
labs(title = "Europe HFI Scores boxplot", x = "Human Freedom Index goals", y = "Score") +
geom_boxplot(width = 0.1, outlier.size = 1, fill = 'white') +
scale_y_continuous(labels = scales::label_number()) +
theme_classic() +
theme(axis.text.x = element_text(angle = 90, vjust = 0.5, hjust = 1))
#Asia
data_Q1_Asia_HFI_long <- melt(data_Q1_Asia[,29:40])
medians_HFI_AS <- data_Q1_Asia_HFI_long %>%
group_by(variable) %>%
summarize(median_value = median(value))
medians_HFI_AS$color <- ifelse(medians_HFI_AS$median_value > 7.5, "lightblue",
ifelse(medians_HFI_AS$median_value < 2.5, "red", 'orange'))
data_Q1_Asia_HFI_long <- data_Q1_Asia_HFI_long %>%
left_join(medians_HFI_AS, by = "variable")
bandwidth_nrd_HFI_AS <- bw.nrd(data_Q1_Asia_HFI_long$value)
HFI_AS <- ggplot(data_Q1_Asia_HFI_long, aes(x = variable, y = value, fill = color)) +
geom_violin(trim = FALSE, bw = bandwidth_nrd_HFI_AS) +
scale_fill_identity() +
labs(title = "Asian HFI Scores boxplot", x = "Human Freedom Index goals", y = "Score") +
geom_boxplot(width = 0.1, outlier.size = 1, fill = 'white') +
scale_y_continuous(labels = scales::label_number()) +
theme_classic() +
theme(axis.text.x = element_text(angle = 90, vjust = 0.5, hjust = 1))
#America
data_Q1_America_HFI_long <- melt(data_Q1_Americas[,29:40])
medians_HFI_AM <- data_Q1_America_HFI_long %>%
group_by(variable) %>%
summarize(median_value = median(value))
medians_HFI_AM$color <- ifelse(medians_HFI_AM$median_value > 7.5, "lightblue",
ifelse(medians_HFI_AM$median_value < 2.5, "red", 'orange'))
data_Q1_America_HFI_long <- data_Q1_America_HFI_long %>%
left_join(medians_HFI_AM, by = "variable")
bandwidth_nrd_HFI_AM <- bw.nrd(data_Q1_America_HFI_long$value)
HFI_AM <- ggplot(data_Q1_America_HFI_long, aes(x = variable, y = value, fill = color)) +
geom_violin(trim = FALSE, bw = bandwidth_nrd_HFI_AM) +
scale_fill_identity() +
labs(title = "America HFI Scores boxplot", x = "Human Freedom Index goals", y = "Score") +
geom_boxplot(width = 0.1, outlier.size = 1, fill = 'white') +
scale_y_continuous(labels = scales::label_number()) +
theme_classic() +
theme(axis.text.x = element_text(angle = 90, vjust = 0.5, hjust = 1))
#Oceania
data_Q1_Oceania_HFI_long <- melt(data_Q1_Oceania[,29:40])
medians_HFI_OC <- data_Q1_Oceania_HFI_long %>%
group_by(variable) %>%
summarize(median_value = median(value))
medians_HFI_OC$color <- ifelse(medians_HFI_OC$median_value > 7.5, "lightblue",
ifelse(medians_HFI_OC$median_value < 2.5, "red", 'orange'))
data_Q1_Oceania_HFI_long <- data_Q1_Oceania_HFI_long %>%
left_join(medians_HFI_OC, by = "variable")
bandwidth_nrd_HFI_OC <- bw.nrd(data_Q1_Oceania_HFI_long$value)
HFI_OC <- ggplot(data_Q1_Oceania_HFI_long, aes(x = variable, y = value, fill = color)) +
geom_violin(trim = FALSE, bw = bandwidth_nrd_HFI_OC) +
scale_fill_identity() +
labs(title = "Oceanian HFI Scores boxplot", x = "Human Freedom Index goals", y = "Score") +
geom_boxplot(width = 0.1, outlier.size = 1, fill = 'white') +
scale_y_continuous(labels = scales::label_number()) +
theme_classic() +
theme(axis.text.x = element_text(angle = 90, vjust = 0.5, hjust = 1))
grid.arrange(HFI_AF,HFI_EU,HFI_AS,HFI_AM,HFI_OC, ncol = 2, nrow = 3)
# boxplot(Correlation_overall[23:34],
# las = 2, # Makes the axis labels perpendicular to the axis
# par(mar = c(7, 5, 2, 1)), # Adjusts the margins to fit all labels
# cex.axis = 0.7, # Reduces the size of the axis labels
# cex.lab = 1, # Reduces the size of the x and y labels
# notch = TRUE, # Specifies whether to add notches or not
# main = "Merged Human Freedom Index scores boxplot",
# ylab = "Score") # Y-axis label
# Correlation_HFI <- melt(Correlation_overall[,23:34])
# ggplot(Correlation_HFI, aes(x= variable, y= value)) +
# geom_violin(trim=FALSE, fill="orange")+
# labs(title="Merged Human Freedom Index scores violin boxplot",x="Variables", y = "Score")+
# geom_boxplot(width=0.1, outlier.size = 1)+
# scale_y_continuous(labels = scales::label_number()) + #limits = c(0, 100)
# theme_classic() +
# theme(axis.text.x = element_text(angle = 90, vjust = 0.5, hjust=1))
v1 <- ggplot(Correlation_overall, aes(x= factor(1), y= GDPpercapita)) +
geom_violin(trim=FALSE, fill="orange")+
labs(title="Violin plot of GDP per capita",x="GDP per capita", y = "Distribution")+
geom_boxplot(width=0.1, outlier.size = 1)+
scale_y_continuous(labels = scales::label_number()) + # Format y-axis labels
theme_classic()
v2 <- ggplot(Correlation_overall, aes(x= factor(1), y= unemployment.rate)) +
geom_violin(trim=FALSE, fill="orange")+
labs(title="Violin plot of unemployment rate",x="Unemployment rate", y = "Distribution")+
geom_boxplot(width=0.1, outlier.size = 1)+
scale_y_continuous(labels = scales::label_number()) + # Format y-axis labels
theme_classic()
v3 <- ggplot(Correlation_overall, aes(x= factor(1), y= MilitaryExpenditurePercentGDP)) +
geom_violin(trim=FALSE, fill="orange")+
labs(title="Violin plot of military expenditure by percentage of GDP",x="Military Expenditure", y = "Distribution")+
geom_boxplot(width=0.1, outlier.size = 1)+
scale_y_continuous(labels = scales::label_number()) + # Format y-axis labels
theme_classic()
v4 <- ggplot(Correlation_overall, aes(x= factor(1), y= internet_usage)) +
geom_violin(trim=FALSE, fill="orange")+
labs(title="Violin plot of internet usage",x="Internet usage", y = "Distribution")+
geom_boxplot(width=0.1, outlier.size = 1)+
scale_y_continuous(labels = scales::label_number()) + # Format y-axis labels
theme_classic()
grid.arrange(v1,v2,v3,v4, ncol = 2, nrow = 2)
We now look at the variables in a summary table to get a more precise view of the numbers.
| X | code | year | country | continent | region | overallscore | goal1 | goal2 | goal3 | goal4 | goal5 | goal6 | goal7 | goal8 | goal9 | goal10 | goal11 | goal12 | goal13 | goal15 | goal16 | goal17 | |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Min. : 1 | Length:3565 | Min. :2000 | Length:3565 | Length:3565 | Length:3565 | Min. :37.4 | Min. : 0.0 | Min. :16.5 | Min. : 5.9 | Min. : 0.0 | Min. : 3.5 | Min. :23.3 | Min. : 0.1 | Min. :40.0 | Min. : 0.3 | Min. : 0.0 | Min. :20.3 | Min. :32.9 | Min. : 0.0 | Min. :26.0 | Min. :27.9 | Min. :15.1 | |
| 1st Qu.: 892 | Class :character | 1st Qu.:2005 | Class :character | Class :character | Class :character | 1st Qu.:55.0 | 1st Qu.: 44.5 | 1st Qu.:52.6 | 1st Qu.:44.3 | 1st Qu.: 55.6 | 1st Qu.:43.2 | 1st Qu.:53.0 | 1st Qu.:41.5 | 1st Qu.:64.0 | 1st Qu.:15.5 | 1st Qu.: 35.2 | 1st Qu.:55.8 | 1st Qu.:67.9 | 1st Qu.:72.9 | 1st Qu.:55.0 | 1st Qu.:51.5 | 1st Qu.:46.1 | |
| Median :1783 | Mode :character | Median :2011 | Mode :character | Mode :character | Mode :character | Median :65.5 | Median : 87.4 | Median :58.9 | Median :70.9 | Median : 80.6 | Median :58.0 | Median :65.3 | Median :65.5 | Median :70.2 | Median :29.4 | Median : 62.2 | Median :75.3 | Median :84.6 | Median :90.8 | Median :65.1 | Median :61.4 | Median :55.4 | |
| Mean :1783 | NA | Mean :2011 | NA | NA | NA | Mean :64.0 | Mean : 71.7 | Mean :58.0 | Mean :64.1 | Mean : 72.0 | Mean :56.0 | Mean :65.0 | Mean :57.9 | Mean :70.0 | Mean :37.5 | Mean : 58.3 | Mean :70.3 | Mean :79.3 | Mean :82.1 | Mean :65.0 | Mean :62.6 | Mean :55.7 | |
| 3rd Qu.:2674 | NA | 3rd Qu.:2017 | NA | NA | NA | 3rd Qu.:72.4 | 3rd Qu.: 98.8 | 3rd Qu.:65.3 | 3rd Qu.:81.4 | 3rd Qu.: 94.5 | 3rd Qu.:68.9 | 3rd Qu.:75.2 | 3rd Qu.:72.6 | 3rd Qu.:76.6 | 3rd Qu.:53.9 | 3rd Qu.: 81.6 | 3rd Qu.:85.1 | 3rd Qu.:94.1 | 3rd Qu.:97.2 | 3rd Qu.:74.3 | 3rd Qu.:74.6 | 3rd Qu.:65.1 | |
| Max. :3565 | NA | Max. :2022 | NA | NA | NA | Max. :86.8 | Max. :100.0 | Max. :83.4 | Max. :97.3 | Max. :100.0 | Max. :94.0 | Max. :95.1 | Max. :99.6 | Max. :88.7 | Max. :99.2 | Max. :100.0 | Max. :99.1 | Max. :99.0 | Max. :99.9 | Max. :97.9 | Max. :96.0 | Max. :96.8 | |
| NA | NA | NA | NA | NA | NA | NA | NA's :276 | NA | NA | NA | NA | NA | NA | NA | NA | NA's :276 | NA | NA | NA | NA | NA | NA |
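A table like the one above can be produced with base R's summary(), which reports the minimum, quartiles, median, mean, maximum, and NA counts per column; a minimal sketch on a toy data frame (illustrative values only, not our data):

```r
# summary() gives Min., 1st Qu., Median, Mean, 3rd Qu., Max. and NA counts for
# numeric columns, and Length/Class/Mode for character columns -- exactly the
# statistics displayed in the table above.
df <- data.frame(year    = c(2000, 2005, 2011, 2017, 2022),
                 country = c("A", "B", "C", "D", "E"),
                 goal1   = c(10, 45, 87, 99, NA))
summary(df)
# knitr::kable(summary(df)) would render the same output as a markdown table
```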
3.2 Focus on the influence of the factors over the SDG scores
Using our cleaned dataset, we first want to observe how each of our variables correlates with the others. For that, we use a heatmap. Given that most of our variables are not normally distributed, we use the Spearman method to calculate the correlations.
Code
#### Correlations between variables Heatmap ####
Correlation_overall <-data_question1 %>% # selection of the numerical data
select(population:ef_regulation)
cor_matrix_sper <- # calculation of the correlation matrix
cor(Correlation_overall, method = "spearman", use = "everything")
cor_melted <- # wide to long transformation
melt(cor_matrix_sper)
ggplot(data = cor_melted, aes(Var1, Var2, fill = value)) +
geom_tile() +
scale_fill_gradient2(low = "blue", high = "red", mid = "white",
midpoint = 0, limit = c(-1, 1), space = "Lab",
name="Spearman\nCorrelation") +
theme_minimal() +
theme(axis.text.x = element_text(angle = 45, vjust = 1, size = 8, hjust = 1),
axis.text.y = element_text(size = 8)) +
coord_fixed() +
labs(x = '', y = '', title = 'Correlation Matrix Heatmap')
#do 3 different heatmaps: goals on goals, goals on other variables (except goals), variables on variables (except goals)
In the correlation matrix heatmap, we observe that goals 1 to 11 are predominantly positively correlated. Conversely, goals 12 and 13 exhibit negative correlations with most variables, except with each other, where they are strongly correlated. Additionally, there is a notably strong correlation among the personal freedom (pf) variables, which reflect scores from the Human Freedom Index on movement, religion, assembly, and expression.
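Because most of our variables are not normally distributed, we rely on Spearman's rank correlation; it is simply Pearson's correlation computed on ranks. A minimal sketch on toy vectors (illustrative data, not from our dataset):

```r
# Spearman's rho equals Pearson's r applied to the ranks of the data,
# which makes it robust to skew and to any monotone transformation.
x <- c(1, 4, 9, 16, 25, 36)   # grows non-linearly
y <- c(2, 3, 5, 7, 11, 13)    # grows monotonically with x

rho_direct <- cor(x, y, method = "spearman")
rho_ranks  <- cor(rank(x), rank(y), method = "pearson")

stopifnot(isTRUE(all.equal(rho_direct, rho_ranks)))
rho_direct  # 1: perfect monotone association, even though the raw Pearson r is below 1
```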
To get an overview of the relationship between our independent variables and the overall SDG score, we draw several panels showing the Spearman correlation coefficients between the variables, scatter plots describing their pairwise relationships, and the distribution of each variable.
Code
#### Spearman's correlation coeff ####
panel.hist <- function(x, ...){
usr <- par("usr"); on.exit(par(usr))
par(usr = c(usr[1:2], 0, 1.5) )
h <- hist(x, plot = FALSE)
breaks <- h$breaks; nB <- length(breaks)
y <- h$counts; y <- y/max(y)
rect(breaks[-nB], 0, breaks[-1], y, col = "lightgreen", ...)
}
panel.cor <- function(x, y, digits = 2, prefix = "", cex.cor, ...){
usr <- par("usr"); on.exit(par(usr))
par(usr = c(0, 1, 0, 1))
r <- cor(x, y, method = "spearman")
txt <- format(c(r, 0.123456789), digits = digits)[1]
txt <- paste0(prefix, txt)
if(missing(cex.cor)) cex.cor <- 0.8/strwidth(txt)
text(0.5, 0.5, txt, cex = cex.cor * r)
}
# # Independent variables
pairs(Correlation_overall[,c("overallscore", "unemployment.rate", "GDPpercapita", "MilitaryExpenditurePercentGDP", "internet_usage")], upper.panel=panel.cor, diag.panel=panel.hist, main="Correlation table and distribution of various variables")
After importing our cleaned data, we first looked at the correlations between our numerical variables.
Code
#### Correlations between variables ####
sdg_scores2 <- data_question1[, c('goal1', 'goal2', 'goal3', 'goal4', 'goal5', 'goal6',
'goal7', 'goal8', 'goal9', 'goal10', 'goal11', 'goal12',
'goal13', 'goal15', 'goal16', 'goal17')]
Correlation_overall <- data_question1 %>%
select(population:ef_regulation)
#before computing Pearson -> logarithmic transformation may be required for some variables with high skewness
# Calculating skewness for each variable
Correlation_overall_skew <- Correlation_overall
Correlation_overall_sqrt <- Correlation_overall
skewness_values <- sapply(Correlation_overall_skew, e1071::skewness)
# Identifying highly skewed variables
highly_skewed_vars <- names(skewness_values[abs(skewness_values) > 1])
highly_skewed_vars_sqrt <- names(skewness_values[abs(skewness_values) > 1])
# Applying logarithmic transformation
Correlation_overall_skew[highly_skewed_vars] <- lapply(Correlation_overall_skew[highly_skewed_vars], function(x) log1p(x))
#applying square root transformation
Correlation_overall_sqrt[highly_skewed_vars_sqrt] <- lapply(Correlation_overall_sqrt[highly_skewed_vars_sqrt], function(x) sqrt(x))
new_skewness_values <- sapply(Correlation_overall_skew[highly_skewed_vars], e1071::skewness)
new_skewness_values_sqrt <- sapply(Correlation_overall_sqrt[highly_skewed_vars_sqrt], e1071::skewness)
#after transformation, many variables still show high skewness values; compare print(new_skewness_values) and print(new_skewness_values_sqrt)
cor_matrix_log <- cor(Correlation_overall_skew, use = "everything")
cor_matrix_sqrt <- cor(Correlation_overall_sqrt, use = "everything")
cor_matrix_sper <- cor(Correlation_overall, method = "spearman", use = "everything")
datatable(cor_matrix_log,
options = list(
pageLength = 10,
class = "hover",
searchHighlight = TRUE,
columnDefs = list(
list(targets = "_all",
render = JS(
"function(data, type, row, meta){",
" if(type === 'display'){",
" return parseFloat(data).toFixed(2)",
" }",
" return data;",
"}")))),
rownames = FALSE)
By doing so, we obtain many positive and negative correlations. To better understand them and get an overall view of the situation, we use the following heatmap.
Code
#### Heatmap ####
cor_melted <- melt(cor_matrix_sper)
ggplot(data = cor_melted, aes(Var1, Var2, fill = value)) +
geom_tile() +
scale_fill_gradient2(low = "blue", high = "red", mid = "white",
midpoint = 0, limit = c(-1, 1), space = "Lab",
name="Spearman\nCorrelation") +
theme_minimal() +
theme(axis.text.x = element_text(angle = 45, vjust = 1, size = 8, hjust = 1),
axis.text.y = element_text(size = 8)) +
coord_fixed() +
labs(x = '', y = '', title = 'Correlation Matrix Heatmap')
#do 3 different heatmaps: goals on goals, goals on other variables (except goals), variables on variables (except goals)
The overall SDG achievement score is highly correlated with the percentage of people using the internet (r=.79) and with GDP per capita (r=.60). The unemployment rate and military expenditure as a percentage of GDP do not seem to play a role. However, this holds only for the overall score.
The overall SDG achievement score is also highly correlated with “personal freedom: law” (r=.69) and “personal freedom: identity” (r=.62). The other dimensions of personal freedom do not seem to have an important influence. Regarding the distributions of the personal freedom variables, we notice that, except for law, all are left-skewed, meaning that most countries have high scores.
The overall SDG achievement score is highly correlated with “economic freedom: legal” (r=.77), “economic freedom: trade” (r=.67), and “economic freedom: money” (r=.60), while the other dimensions of economic freedom do not seem to have an important influence. Regarding the distributions of the economic freedom variables, we notice more heterogeneous distributions and scores across countries than for personal freedom.
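The skewness screening used in the correlation code above (variables with |skewness| > 1 get a log1p or square-root transform) can be sketched on synthetic data; the skew() helper below is a hypothetical stand-in for e1071::skewness (third standardized moment), and the lognormal sample mimics a GDP-per-capita-like variable:

```r
skew <- function(x) {                       # third standardized moment
  m <- mean(x)
  mean((x - m)^3) / mean((x - m)^2)^1.5
}

set.seed(1)
gdp_like <- rlnorm(1000, meanlog = 9, sdlog = 1)  # heavily right-skewed sample

skew(gdp_like)          # well above 1 -> would be flagged as highly skewed
skew(log1p(gdp_like))   # near 0: log1p() largely removes the skew
skew(sqrt(gdp_like))    # reduced, but often still above 1 -- echoing the note
                        # in the code that many variables stay skewed after sqrt
```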
3.2.1 Looking at SDGs
As we can see in the graph, most of our goals are correlated with one another. We will now perform a PCA to see how much of the variation in our variables can be summarized by a few components. The scree plot of our PCA is shown below.
Code
#### PCA and PCA Scree plot####
myPCA_g <- PCA(data_question1[,9:24], graph = FALSE)
fviz_eig(myPCA_g,
addlabels = TRUE) +
theme_minimal()
As we can see, Dimension 1 alone explains more than 60% of the variation in our data; with Dimension 2, this rises to around 70%. We can now plot our data along the first two dimensions.
Code
#### PCA Biplot ####
fviz_pca_biplot(myPCA_g,
label="var",
col.var="dodgerblue3",
geom="point",
pointsize = 0.1,
labelsize = 5) +
theme_minimal()
Concerning the SDG goals, we conclude that most of our variables load along the 1st component, except goals 10 and 15, which are rather uncorrelated with dimension 1. In addition, as seen before, goals 12 and 13 are negatively correlated with the other goals. With an eigenvalue greater than 1 for the first two components only, we conclude, following the Kaiser-Guttman rule, that there are just 2 dimensions to take into account. Nevertheless, they explain less than 80% of the cumulative variance.
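The Kaiser-Guttman rule applied here keeps components whose eigenvalue exceeds 1; for a PCA on standardized variables, the eigenvalues are the component variances and sum to the number of variables. A minimal sketch with prcomp on a built-in dataset (not our SDG data):

```r
pca  <- prcomp(USArrests, scale. = TRUE)  # PCA on standardized variables
eig  <- pca$sdev^2                        # eigenvalues = component variances
prop <- eig / sum(eig)                    # share of variance per component

sum(eig)       # equals the number of variables (here 4)
sum(eig > 1)   # components retained under the Kaiser-Guttman rule
cumsum(prop)   # cumulative variance explained, as read off a scree plot
```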
3.2.2 Looking at the HFI scores
Code
#### PCA and PCA Scree plot####
myPCA_s <- PCA(data_question1[,29:40], graph = FALSE)
fviz_eig(myPCA_s,
addlabels = TRUE) +
theme_minimal()
Code
#### PCA Biplot ####
fviz_pca_biplot(myPCA_s,
label="var",
col.var="dodgerblue3",
geom="point",
pointsize = 0.1,
labelsize = 5) +
theme_minimal()
Now concerning the Human Freedom Index scores, most of the variables are positively correlated with dimension 1, somewhat less so for PF religion and PF security, and finally the EF government variable is uncorrelated with dimension 1. With an eigenvalue greater than 1 for the first three components, we conclude that there are 3 dimensions to take into account. Nevertheless, again, they explain less than 80% of the cumulative variance.
Code
#### Kmean clustering ####
data1_scaled <- scale(Correlation_overall)
rownames(data1_scaled) <- seq_along(row.names(data1_scaled))
fviz_nbclust(data1_scaled, kmeans, method="wss")
kmean <- kmeans(data1_scaled, 7, nstart = 25)
fviz_cluster(kmean, data=data1_scaled, repel=FALSE, depth =NULL, ellipse.type = "norm", labelsize = 0, pointsize = 0.5)
### NOW CLUSTERING BY COUNTRY? AND TAKE MEAN OF EVERY VARIABLE ON EVERY CONCERNED YEAR?
Due to the large number of observations, the visualization of the clusters using the k-means method is not really informative. In addition, by clustering our data we are trying to obtain groups that differ from each other while keeping little variation among observations within the same cluster. Here, only 60.6% of the variance is explained by the variation between clusters, which is not enough.
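The 60.6% quoted above is the share of total variance captured between clusters, which kmeans reports as betweenss / totss; a minimal sketch on toy data (two synthetic groups, not our dataset):

```r
set.seed(42)
toy <- scale(rbind(matrix(rnorm(100, mean = 0), ncol = 2),   # group around 0
                   matrix(rnorm(100, mean = 4), ncol = 2)))  # group around 4

fit <- kmeans(toy, centers = 2, nstart = 25)

# "Variance explained" by the clustering = between-cluster SS / total SS;
# higher means compact, well-separated clusters.
explained <- fit$betweenss / fit$totss
round(100 * explained, 1)   # the analogue of the 60.6% reported for our data
```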